Apr 24 00:35:08.612010 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 23 22:08:58 -00 2026
Apr 24 00:35:08.612032 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:35:08.612042 kernel: BIOS-provided physical RAM map:
Apr 24 00:35:08.612048 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 24 00:35:08.612053 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 24 00:35:08.612057 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 24 00:35:08.612063 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 24 00:35:08.612067 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 24 00:35:08.612072 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 24 00:35:08.612076 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 24 00:35:08.612081 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 24 00:35:08.612085 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 24 00:35:08.612091 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 24 00:35:08.612096 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 24 00:35:08.612102 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 24 00:35:08.612107 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 24 00:35:08.612111 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 24 00:35:08.612118 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 24 00:35:08.612122 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 24 00:35:08.612127 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 24 00:35:08.612132 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 24 00:35:08.612137 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 24 00:35:08.612141 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 24 00:35:08.612146 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 00:35:08.612150 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 24 00:35:08.612155 kernel: NX (Execute Disable) protection: active
Apr 24 00:35:08.612160 kernel: APIC: Static calls initialized
Apr 24 00:35:08.612165 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 24 00:35:08.612171 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 24 00:35:08.612176 kernel: extended physical RAM map:
Apr 24 00:35:08.612180 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 24 00:35:08.612185 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 24 00:35:08.612190 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 24 00:35:08.612195 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 24 00:35:08.612200 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 24 00:35:08.612204 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 24 00:35:08.612209 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 24 00:35:08.612214 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 24 00:35:08.612219 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 24 00:35:08.612225 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 24 00:35:08.612232 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 24 00:35:08.612237 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 24 00:35:08.612242 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 24 00:35:08.612247 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 24 00:35:08.612254 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 24 00:35:08.612259 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 24 00:35:08.612264 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 24 00:35:08.612269 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 24 00:35:08.612274 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 24 00:35:08.612279 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 24 00:35:08.612284 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 24 00:35:08.612289 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 24 00:35:08.612294 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 24 00:35:08.612299 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 24 00:35:08.612304 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 00:35:08.612310 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 24 00:35:08.612315 kernel: efi: EFI v2.7 by EDK II
Apr 24 00:35:08.612452 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 24 00:35:08.612457 kernel: random: crng init done
Apr 24 00:35:08.612463 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 24 00:35:08.612468 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 24 00:35:08.612473 kernel: secureboot: Secure boot disabled
Apr 24 00:35:08.612478 kernel: SMBIOS 2.8 present.
Apr 24 00:35:08.612483 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 24 00:35:08.612488 kernel: DMI: Memory slots populated: 1/1
Apr 24 00:35:08.612493 kernel: Hypervisor detected: KVM
Apr 24 00:35:08.612498 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 24 00:35:08.612505 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 00:35:08.612510 kernel: kvm-clock: using sched offset of 6298442988 cycles
Apr 24 00:35:08.612516 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 00:35:08.612521 kernel: tsc: Detected 2793.438 MHz processor
Apr 24 00:35:08.612526 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 00:35:08.612531 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 00:35:08.612536 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 24 00:35:08.612542 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 24 00:35:08.612547 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 00:35:08.612553 kernel: Using GB pages for direct mapping
Apr 24 00:35:08.612559 kernel: ACPI: Early table checksum verification disabled
Apr 24 00:35:08.612564 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 24 00:35:08.612569 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 24 00:35:08.612575 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:35:08.612580 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:35:08.612585 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 24 00:35:08.612590 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:35:08.612595 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:35:08.612602 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:35:08.612607 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:35:08.612612 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 24 00:35:08.612617 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 24 00:35:08.612622 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 24 00:35:08.612627 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 24 00:35:08.612633 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 24 00:35:08.612638 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 24 00:35:08.612643 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 24 00:35:08.612649 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 24 00:35:08.612654 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 24 00:35:08.612659 kernel: No NUMA configuration found
Apr 24 00:35:08.612664 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 24 00:35:08.612670 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 24 00:35:08.612675 kernel: Zone ranges:
Apr 24 00:35:08.612680 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 00:35:08.612685 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 24 00:35:08.612690 kernel: Normal empty
Apr 24 00:35:08.612696 kernel: Device empty
Apr 24 00:35:08.612702 kernel: Movable zone start for each node
Apr 24 00:35:08.612707 kernel: Early memory node ranges
Apr 24 00:35:08.612712 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 24 00:35:08.612717 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 24 00:35:08.612722 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 24 00:35:08.612727 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 24 00:35:08.612732 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 24 00:35:08.612737 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 24 00:35:08.612742 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 24 00:35:08.612749 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 24 00:35:08.612754 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 24 00:35:08.612759 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 00:35:08.612764 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 24 00:35:08.612769 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 24 00:35:08.612779 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 00:35:08.612786 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 24 00:35:08.612792 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 24 00:35:08.612797 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 24 00:35:08.612803 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 24 00:35:08.612809 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 24 00:35:08.612814 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 24 00:35:08.612821 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 00:35:08.612827 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 24 00:35:08.612832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 24 00:35:08.612838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 00:35:08.612844 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 00:35:08.612851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 00:35:08.612856 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 00:35:08.612862 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 00:35:08.612867 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 00:35:08.612873 kernel: TSC deadline timer available
Apr 24 00:35:08.612878 kernel: CPU topo: Max. logical packages: 1
Apr 24 00:35:08.612884 kernel: CPU topo: Max. logical dies: 1
Apr 24 00:35:08.612890 kernel: CPU topo: Max. dies per package: 1
Apr 24 00:35:08.612895 kernel: CPU topo: Max. threads per core: 1
Apr 24 00:35:08.612902 kernel: CPU topo: Num. cores per package: 4
Apr 24 00:35:08.612907 kernel: CPU topo: Num. threads per package: 4
Apr 24 00:35:08.612913 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 24 00:35:08.612918 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 00:35:08.612924 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 24 00:35:08.612930 kernel: kvm-guest: setup PV sched yield
Apr 24 00:35:08.612936 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 24 00:35:08.612941 kernel: Booting paravirtualized kernel on KVM
Apr 24 00:35:08.612947 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 00:35:08.612954 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 24 00:35:08.612960 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 24 00:35:08.612965 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 24 00:35:08.612971 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 24 00:35:08.612976 kernel: kvm-guest: PV spinlocks enabled
Apr 24 00:35:08.612982 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 00:35:08.612989 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:35:08.612995 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 00:35:08.613001 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 00:35:08.613008 kernel: Fallback order for Node 0: 0
Apr 24 00:35:08.613013 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 24 00:35:08.613019 kernel: Policy zone: DMA32
Apr 24 00:35:08.613024 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 00:35:08.613030 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 24 00:35:08.613036 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 24 00:35:08.613041 kernel: ftrace: allocated 157 pages with 5 groups
Apr 24 00:35:08.613047 kernel: Dynamic Preempt: voluntary
Apr 24 00:35:08.613053 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 00:35:08.613060 kernel: rcu: RCU event tracing is enabled.
Apr 24 00:35:08.613066 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 24 00:35:08.613072 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 00:35:08.613077 kernel: Rude variant of Tasks RCU enabled.
Apr 24 00:35:08.613083 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 00:35:08.613089 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 00:35:08.613094 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 24 00:35:08.613100 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 00:35:08.613106 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 00:35:08.613113 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 00:35:08.613119 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 24 00:35:08.613124 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 00:35:08.613130 kernel: Console: colour dummy device 80x25
Apr 24 00:35:08.613135 kernel: printk: legacy console [ttyS0] enabled
Apr 24 00:35:08.613141 kernel: ACPI: Core revision 20240827
Apr 24 00:35:08.613147 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 24 00:35:08.613152 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 00:35:08.613158 kernel: x2apic enabled
Apr 24 00:35:08.613165 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 00:35:08.613170 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 24 00:35:08.613176 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 24 00:35:08.613182 kernel: kvm-guest: setup PV IPIs
Apr 24 00:35:08.613187 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 24 00:35:08.613193 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 00:35:08.613199 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 24 00:35:08.613205 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 24 00:35:08.613210 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 24 00:35:08.613217 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 24 00:35:08.613223 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 00:35:08.613228 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 00:35:08.613234 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 00:35:08.613240 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 24 00:35:08.613245 kernel: RETBleed: Vulnerable
Apr 24 00:35:08.613251 kernel: Speculative Store Bypass: Vulnerable
Apr 24 00:35:08.613257 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 00:35:08.613264 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 24 00:35:08.613269 kernel: active return thunk: its_return_thunk
Apr 24 00:35:08.613275 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 24 00:35:08.613280 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 00:35:08.613286 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 00:35:08.613291 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 00:35:08.613297 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 24 00:35:08.613303 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 24 00:35:08.613308 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 24 00:35:08.613315 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 00:35:08.613479 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 24 00:35:08.613484 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 24 00:35:08.613490 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 24 00:35:08.613496 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 24 00:35:08.613502 kernel: Freeing SMP alternatives memory: 32K
Apr 24 00:35:08.613507 kernel: pid_max: default: 32768 minimum: 301
Apr 24 00:35:08.613513 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 24 00:35:08.613518 kernel: landlock: Up and running.
Apr 24 00:35:08.613526 kernel: SELinux: Initializing.
Apr 24 00:35:08.613531 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 00:35:08.613537 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 00:35:08.613543 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 24 00:35:08.613549 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 24 00:35:08.613554 kernel: signal: max sigframe size: 3632
Apr 24 00:35:08.613560 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 00:35:08.613566 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 00:35:08.613571 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 24 00:35:08.613579 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 24 00:35:08.613584 kernel: smp: Bringing up secondary CPUs ...
Apr 24 00:35:08.613590 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 00:35:08.613595 kernel: .... node #0, CPUs: #1 #2 #3
Apr 24 00:35:08.613601 kernel: smp: Brought up 1 node, 4 CPUs
Apr 24 00:35:08.613607 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 24 00:35:08.613613 kernel: Memory: 2374700K/2565800K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 185212K reserved, 0K cma-reserved)
Apr 24 00:35:08.613618 kernel: devtmpfs: initialized
Apr 24 00:35:08.613624 kernel: x86/mm: Memory block size: 128MB
Apr 24 00:35:08.613631 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 24 00:35:08.613637 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 24 00:35:08.613642 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 24 00:35:08.613648 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 24 00:35:08.613654 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 24 00:35:08.613659 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 24 00:35:08.613665 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 00:35:08.613670 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 24 00:35:08.613676 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 00:35:08.613683 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 00:35:08.613688 kernel: audit: initializing netlink subsys (disabled)
Apr 24 00:35:08.613694 kernel: audit: type=2000 audit(1776990904.373:1): state=initialized audit_enabled=0 res=1
Apr 24 00:35:08.613699 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 00:35:08.613705 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 00:35:08.613711 kernel: cpuidle: using governor menu
Apr 24 00:35:08.613716 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 24 00:35:08.613722 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 00:35:08.613728 kernel: dca service started, version 1.12.1
Apr 24 00:35:08.613735 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 24 00:35:08.613740 kernel: PCI: Using configuration type 1 for base access
Apr 24 00:35:08.613746 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 00:35:08.613751 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 00:35:08.613757 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 00:35:08.613763 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 00:35:08.613768 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 00:35:08.613774 kernel: ACPI: Added _OSI(Module Device)
Apr 24 00:35:08.613780 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 00:35:08.613786 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 00:35:08.613792 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 00:35:08.613798 kernel: ACPI: Interpreter enabled
Apr 24 00:35:08.613803 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 24 00:35:08.613809 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 00:35:08.613815 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 00:35:08.613820 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 00:35:08.613826 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 00:35:08.613831 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 00:35:08.613942 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 00:35:08.613998 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 00:35:08.614050 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 00:35:08.614057 kernel: PCI host bridge to bus 0000:00
Apr 24 00:35:08.614112 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 00:35:08.614160 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 00:35:08.614208 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 00:35:08.614253 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 24 00:35:08.614299 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 24 00:35:08.614519 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 24 00:35:08.614569 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 00:35:08.614642 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 24 00:35:08.614702 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 24 00:35:08.614757 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 24 00:35:08.614809 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 24 00:35:08.614861 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 24 00:35:08.614912 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 00:35:08.614969 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 24 00:35:08.615024 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 24 00:35:08.615078 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 24 00:35:08.615130 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 24 00:35:08.615187 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 24 00:35:08.615240 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 24 00:35:08.615293 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 24 00:35:08.615515 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 24 00:35:08.615576 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 24 00:35:08.615631 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 24 00:35:08.615683 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 24 00:35:08.615734 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 24 00:35:08.615786 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 24 00:35:08.615842 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 24 00:35:08.615895 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 00:35:08.615946 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 12695 usecs
Apr 24 00:35:08.616003 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 24 00:35:08.616055 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 24 00:35:08.616106 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 24 00:35:08.616161 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 24 00:35:08.616213 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 24 00:35:08.616220 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 00:35:08.616226 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 00:35:08.616233 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 00:35:08.616239 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 00:35:08.616244 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 00:35:08.616250 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 00:35:08.616256 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 00:35:08.616261 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 00:35:08.616267 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 00:35:08.616273 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 00:35:08.616278 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 00:35:08.616285 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 00:35:08.616290 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 00:35:08.616296 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 00:35:08.616302 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 00:35:08.616307 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 00:35:08.616313 kernel: iommu: Default domain type: Translated
Apr 24 00:35:08.616482 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 00:35:08.616489 kernel: efivars: Registered efivars operations
Apr 24 00:35:08.616494 kernel: PCI: Using ACPI for IRQ routing
Apr 24 00:35:08.616501 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 00:35:08.616507 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 24 00:35:08.616513 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 24 00:35:08.616518 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 24 00:35:08.616524 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 24 00:35:08.616529 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 24 00:35:08.616535 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 24 00:35:08.616541 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 24 00:35:08.616546 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 24 00:35:08.616604 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 00:35:08.616656 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 00:35:08.616707 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 00:35:08.616714 kernel: vgaarb: loaded
Apr 24 00:35:08.616720 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 00:35:08.616725 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 00:35:08.616731 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 00:35:08.616737 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 00:35:08.616744 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 00:35:08.616750 kernel: pnp: PnP ACPI init
Apr 24 00:35:08.616810 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 24 00:35:08.616819 kernel: pnp: PnP ACPI: found 6 devices
Apr 24 00:35:08.616834 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 00:35:08.616841 kernel: NET: Registered PF_INET protocol family
Apr 24 00:35:08.616847 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 00:35:08.616853 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 00:35:08.616860 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 00:35:08.616866 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 00:35:08.616872 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 00:35:08.616878 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 00:35:08.616884 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 00:35:08.616890 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 00:35:08.616896 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 00:35:08.616902 kernel: NET: Registered PF_XDP protocol family
Apr 24 00:35:08.616955 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 24 00:35:08.617010 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 24 00:35:08.617058 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 00:35:08.617105 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 00:35:08.617154 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 00:35:08.617201 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 24 00:35:08.617247 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 24 00:35:08.617293 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 24 00:35:08.617300 kernel: PCI: CLS 0 bytes, default 64
Apr 24 00:35:08.617308 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 00:35:08.617314 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 00:35:08.617488 kernel: Initialise system trusted keyrings
Apr 24 00:35:08.617494 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 00:35:08.617500 kernel: Key type asymmetric registered
Apr 24 00:35:08.617507 kernel: Asymmetric key parser 'x509' registered
Apr 24 00:35:08.617513 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 24 00:35:08.617519 kernel: io scheduler mq-deadline registered
Apr 24 00:35:08.617525 kernel: io scheduler kyber registered
Apr 24 00:35:08.617530 kernel: io scheduler bfq registered
Apr 24 00:35:08.617536 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 00:35:08.617543 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 24 00:35:08.617549 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 24 00:35:08.617554 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 24 00:35:08.617561 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 00:35:08.617567 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 00:35:08.617573 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 00:35:08.617580 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 00:35:08.617586 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 00:35:08.617645 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 24 00:35:08.617653 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 00:35:08.617700 kernel: rtc_cmos 00:04: registered as rtc0
Apr 24 00:35:08.617751 kernel: rtc_cmos 00:04: setting system clock to 2026-04-24T00:35:07 UTC (1776990907)
Apr 24 00:35:08.617799 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 24 00:35:08.617806 kernel: intel_pstate: CPU model not supported
Apr 24 00:35:08.617812 kernel: efifb: probing for efifb
Apr 24 00:35:08.617818 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 24 00:35:08.617824 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 24 00:35:08.617830 kernel: efifb: scrolling: redraw
Apr 24 00:35:08.617836 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 24 00:35:08.617841 kernel: Console: switching to colour frame buffer device 160x50
Apr 24 00:35:08.617849 kernel: fb0: EFI VGA frame buffer device
Apr 24 00:35:08.617854 kernel: pstore: Using crash dump compression: deflate
Apr 24 00:35:08.617860 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 24 00:35:08.617866 kernel: NET: Registered PF_INET6 protocol family
Apr 24 00:35:08.617872 kernel: Segment Routing with IPv6
Apr 24 00:35:08.617878 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 00:35:08.617884 kernel: NET: Registered PF_PACKET protocol family
Apr 24 00:35:08.617890 kernel: Key type dns_resolver registered
Apr 24 00:35:08.617895 kernel: IPI shorthand broadcast: enabled
Apr 24 00:35:08.617903 kernel: sched_clock: Marking stable (4822052931, 593142934)->(5662629312, -247433447)
Apr 24 00:35:08.617908 kernel: registered taskstats version 1
Apr 24 00:35:08.617914 kernel: Loading compiled-in X.509 certificates
Apr 24 00:35:08.617920 kernel: Loaded X.509 cert 'Kinvolk
GmbH: Module signing key for 6.12.81-flatcar: 09f9b319c99eb3f54e68ef799fdb2bce5b238ec0' Apr 24 00:35:08.617926 kernel: Demotion targets for Node 0: null Apr 24 00:35:08.617932 kernel: Key type .fscrypt registered Apr 24 00:35:08.617937 kernel: Key type fscrypt-provisioning registered Apr 24 00:35:08.617943 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 24 00:35:08.617949 kernel: ima: Allocated hash algorithm: sha1 Apr 24 00:35:08.617955 kernel: ima: No architecture policies found Apr 24 00:35:08.617962 kernel: clk: Disabling unused clocks Apr 24 00:35:08.617968 kernel: Warning: unable to open an initial console. Apr 24 00:35:08.617973 kernel: Freeing unused kernel image (initmem) memory: 46224K Apr 24 00:35:08.617979 kernel: Write protecting the kernel read-only data: 40960k Apr 24 00:35:08.617985 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K Apr 24 00:35:08.617991 kernel: Run /init as init process Apr 24 00:35:08.617997 kernel: with arguments: Apr 24 00:35:08.618003 kernel: /init Apr 24 00:35:08.618008 kernel: with environment: Apr 24 00:35:08.618015 kernel: HOME=/ Apr 24 00:35:08.618021 kernel: TERM=linux Apr 24 00:35:08.618027 systemd[1]: Successfully made /usr/ read-only. Apr 24 00:35:08.618035 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 24 00:35:08.618042 systemd[1]: Detected virtualization kvm. Apr 24 00:35:08.618048 systemd[1]: Detected architecture x86-64. Apr 24 00:35:08.618054 systemd[1]: Running in initrd. Apr 24 00:35:08.618061 systemd[1]: No hostname configured, using default hostname. Apr 24 00:35:08.618068 systemd[1]: Hostname set to . 
Apr 24 00:35:08.618074 systemd[1]: Initializing machine ID from VM UUID. Apr 24 00:35:08.618080 systemd[1]: Queued start job for default target initrd.target. Apr 24 00:35:08.618086 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 00:35:08.618092 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 00:35:08.618099 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 24 00:35:08.618105 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 24 00:35:08.618112 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 24 00:35:08.618119 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 24 00:35:08.618127 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 24 00:35:08.618133 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 24 00:35:08.618139 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 00:35:08.618145 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 00:35:08.618152 systemd[1]: Reached target paths.target - Path Units. Apr 24 00:35:08.618159 systemd[1]: Reached target slices.target - Slice Units. Apr 24 00:35:08.618165 systemd[1]: Reached target swap.target - Swaps. Apr 24 00:35:08.618171 systemd[1]: Reached target timers.target - Timer Units. Apr 24 00:35:08.618177 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 00:35:08.618183 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 00:35:08.618189 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 24 00:35:08.618195 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 24 00:35:08.618202 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 24 00:35:08.618208 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 24 00:35:08.618215 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 00:35:08.618221 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 00:35:08.618228 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 24 00:35:08.618234 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 24 00:35:08.618240 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 24 00:35:08.618246 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 24 00:35:08.618252 systemd[1]: Starting systemd-fsck-usr.service... Apr 24 00:35:08.618258 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 24 00:35:08.618265 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 24 00:35:08.618272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 00:35:08.618278 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 24 00:35:08.618284 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 00:35:08.618291 systemd[1]: Finished systemd-fsck-usr.service. Apr 24 00:35:08.618298 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 00:35:08.618304 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 00:35:08.618487 systemd-journald[204]: Collecting audit messages is disabled. 
Apr 24 00:35:08.618506 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 00:35:08.618513 systemd-journald[204]: Journal started Apr 24 00:35:08.618529 systemd-journald[204]: Runtime Journal (/run/log/journal/32297fa419484c75b4ee6ffa985ef4ee) is 6M, max 48.1M, 42.1M free. Apr 24 00:35:08.588104 systemd-modules-load[205]: Inserted module 'overlay' Apr 24 00:35:08.643715 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 00:35:08.655707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 00:35:08.664273 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 00:35:08.676680 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 00:35:08.725545 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 24 00:35:08.729652 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 00:35:08.731645 kernel: Bridge firewalling registered Apr 24 00:35:08.746562 systemd-modules-load[205]: Inserted module 'br_netfilter' Apr 24 00:35:08.747609 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 00:35:08.749221 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 00:35:08.771976 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 00:35:08.790686 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 24 00:35:08.796775 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 24 00:35:08.799116 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 24 00:35:08.831495 dracut-cmdline[240]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e Apr 24 00:35:08.842843 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 00:35:08.875493 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 00:35:08.925002 systemd-resolved[279]: Positive Trust Anchors: Apr 24 00:35:08.925086 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 00:35:08.925110 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 00:35:08.927204 systemd-resolved[279]: Defaulting to hostname 'linux'. Apr 24 00:35:08.928218 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 00:35:08.930747 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 00:35:09.044556 kernel: SCSI subsystem initialized Apr 24 00:35:09.058560 kernel: Loading iSCSI transport class v2.0-870. 
Apr 24 00:35:09.077504 kernel: iscsi: registered transport (tcp) Apr 24 00:35:09.108011 kernel: iscsi: registered transport (qla4xxx) Apr 24 00:35:09.108068 kernel: QLogic iSCSI HBA Driver Apr 24 00:35:09.149025 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 24 00:35:09.187246 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 24 00:35:09.206858 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 24 00:35:09.284101 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 24 00:35:09.299092 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 24 00:35:09.402590 kernel: raid6: avx512x4 gen() 39282 MB/s Apr 24 00:35:09.422599 kernel: raid6: avx512x2 gen() 37628 MB/s Apr 24 00:35:09.442596 kernel: raid6: avx512x1 gen() 38392 MB/s Apr 24 00:35:09.462605 kernel: raid6: avx2x4 gen() 31936 MB/s Apr 24 00:35:09.482606 kernel: raid6: avx2x2 gen() 32323 MB/s Apr 24 00:35:09.506692 kernel: raid6: avx2x1 gen() 24177 MB/s Apr 24 00:35:09.506731 kernel: raid6: using algorithm avx512x4 gen() 39282 MB/s Apr 24 00:35:09.531696 kernel: raid6: .... xor() 8381 MB/s, rmw enabled Apr 24 00:35:09.531743 kernel: raid6: using avx512x2 recovery algorithm Apr 24 00:35:09.557601 kernel: xor: automatically using best checksumming function avx Apr 24 00:35:09.787643 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 24 00:35:09.801027 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 24 00:35:09.810120 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 00:35:09.849989 systemd-udevd[454]: Using default interface naming scheme 'v255'. Apr 24 00:35:09.853911 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 00:35:09.859201 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 24 00:35:09.924810 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Apr 24 00:35:09.981578 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 24 00:35:09.990908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 24 00:35:10.048866 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 00:35:10.059682 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 24 00:35:10.127049 kernel: cryptd: max_cpu_qlen set to 1000 Apr 24 00:35:10.127093 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 24 00:35:10.151959 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 24 00:35:10.171049 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 24 00:35:10.171090 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 24 00:35:10.171101 kernel: GPT:9289727 != 19775487 Apr 24 00:35:10.171110 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 24 00:35:10.171118 kernel: GPT:9289727 != 19775487 Apr 24 00:35:10.171126 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 24 00:35:10.171133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 00:35:10.166309 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 00:35:10.166583 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 00:35:10.217263 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 00:35:10.218697 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 00:35:10.240616 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 24 00:35:10.280603 kernel: libata version 3.00 loaded. 
Apr 24 00:35:10.292648 kernel: AES CTR mode by8 optimization enabled Apr 24 00:35:10.297176 kernel: ahci 0000:00:1f.2: version 3.0 Apr 24 00:35:10.297689 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 24 00:35:10.316561 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 24 00:35:10.316690 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 24 00:35:10.316760 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 24 00:35:10.319310 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 24 00:35:10.361456 kernel: scsi host0: ahci Apr 24 00:35:10.365639 kernel: scsi host1: ahci Apr 24 00:35:10.365764 kernel: scsi host2: ahci Apr 24 00:35:10.370506 kernel: scsi host3: ahci Apr 24 00:35:10.372667 kernel: scsi host4: ahci Apr 24 00:35:10.372780 kernel: scsi host5: ahci Apr 24 00:35:10.372849 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Apr 24 00:35:10.372857 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Apr 24 00:35:10.372865 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Apr 24 00:35:10.373670 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Apr 24 00:35:10.373688 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Apr 24 00:35:10.373695 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Apr 24 00:35:10.389553 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 24 00:35:10.430146 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 24 00:35:10.432945 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Apr 24 00:35:10.433187 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 00:35:10.447192 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 24 00:35:10.522252 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 00:35:10.456050 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 24 00:35:10.531277 disk-uuid[645]: Primary Header is updated. Apr 24 00:35:10.531277 disk-uuid[645]: Secondary Entries is updated. Apr 24 00:35:10.531277 disk-uuid[645]: Secondary Header is updated. Apr 24 00:35:10.689483 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 24 00:35:10.689543 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 24 00:35:10.699474 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 24 00:35:10.705674 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 24 00:35:10.712624 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 24 00:35:10.721768 kernel: ata3.00: LPM support broken, forcing max_power Apr 24 00:35:10.721804 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 24 00:35:10.721815 kernel: ata3.00: applying bridge limits Apr 24 00:35:10.731587 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 24 00:35:10.740377 kernel: ata3.00: LPM support broken, forcing max_power Apr 24 00:35:10.740393 kernel: ata3.00: configured for UDMA/100 Apr 24 00:35:10.749663 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 24 00:35:10.810139 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 24 00:35:10.810601 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 24 00:35:10.830520 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 24 00:35:11.175199 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 24 00:35:11.176042 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Apr 24 00:35:11.190283 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 00:35:11.222503 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 24 00:35:11.230886 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 24 00:35:11.287953 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 24 00:35:11.563489 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 00:35:11.564158 disk-uuid[646]: The operation has completed successfully. Apr 24 00:35:11.596205 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 24 00:35:11.602802 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 24 00:35:11.648757 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 24 00:35:11.678034 sh[673]: Success Apr 24 00:35:11.718227 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 24 00:35:11.718293 kernel: device-mapper: uevent: version 1.0.3 Apr 24 00:35:11.718707 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 24 00:35:11.751861 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 24 00:35:11.796279 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 24 00:35:11.812274 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 24 00:35:11.837241 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 24 00:35:11.854208 kernel: BTRFS: device fsid b0afcb9a-4dc6-42cc-b61f-b370046a03ca devid 1 transid 32 /dev/mapper/usr (253:0) scanned by mount (686) Apr 24 00:35:11.878877 kernel: BTRFS info (device dm-0): first mount of filesystem b0afcb9a-4dc6-42cc-b61f-b370046a03ca Apr 24 00:35:11.878919 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 24 00:35:11.904251 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 24 00:35:11.904296 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 24 00:35:11.906844 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 24 00:35:11.907687 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 24 00:35:11.918891 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 24 00:35:11.919999 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 24 00:35:11.965136 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 24 00:35:12.013548 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (715) Apr 24 00:35:12.021588 kernel: BTRFS info (device vda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995 Apr 24 00:35:12.028522 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 00:35:12.047249 kernel: BTRFS info (device vda6): turning on async discard Apr 24 00:35:12.047275 kernel: BTRFS info (device vda6): enabling free space tree Apr 24 00:35:12.063620 kernel: BTRFS info (device vda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995 Apr 24 00:35:12.068626 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 24 00:35:12.077187 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 24 00:35:12.207247 ignition[773]: Ignition 2.22.0 Apr 24 00:35:12.207573 ignition[773]: Stage: fetch-offline Apr 24 00:35:12.207594 ignition[773]: no configs at "/usr/lib/ignition/base.d" Apr 24 00:35:12.207600 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 00:35:12.207664 ignition[773]: parsed url from cmdline: "" Apr 24 00:35:12.207666 ignition[773]: no config URL provided Apr 24 00:35:12.207670 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Apr 24 00:35:12.207675 ignition[773]: no config at "/usr/lib/ignition/user.ign" Apr 24 00:35:12.207692 ignition[773]: op(1): [started] loading QEMU firmware config module Apr 24 00:35:12.207696 ignition[773]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 24 00:35:12.260192 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 00:35:12.272584 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 00:35:12.278151 ignition[773]: op(1): [finished] loading QEMU firmware config module Apr 24 00:35:12.337616 systemd-networkd[863]: lo: Link UP Apr 24 00:35:12.337684 systemd-networkd[863]: lo: Gained carrier Apr 24 00:35:12.338649 systemd-networkd[863]: Enumeration completed Apr 24 00:35:12.338970 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 00:35:12.340623 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 00:35:12.340627 systemd-networkd[863]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 00:35:12.342471 systemd-networkd[863]: eth0: Link UP Apr 24 00:35:12.342739 systemd-networkd[863]: eth0: Gained carrier Apr 24 00:35:12.342746 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 24 00:35:12.361743 systemd[1]: Reached target network.target - Network. Apr 24 00:35:12.429546 systemd-networkd[863]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 24 00:35:13.029775 ignition[773]: parsing config with SHA512: 39ea369ad01cf0b2d74c5ed9fb883c6eae41d64566c972c31a7d089879dcbbee5c42b504c43b933fae74768ca98a04a0f1ac3371445caac63d7c5a4eb18a2912 Apr 24 00:35:13.048937 unknown[773]: fetched base config from "system" Apr 24 00:35:13.049000 unknown[773]: fetched user config from "qemu" Apr 24 00:35:13.060906 ignition[773]: fetch-offline: fetch-offline passed Apr 24 00:35:13.066893 ignition[773]: Ignition finished successfully Apr 24 00:35:13.074597 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 00:35:13.090880 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 24 00:35:13.104835 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 24 00:35:13.160170 ignition[868]: Ignition 2.22.0 Apr 24 00:35:13.160238 ignition[868]: Stage: kargs Apr 24 00:35:13.160513 ignition[868]: no configs at "/usr/lib/ignition/base.d" Apr 24 00:35:13.160520 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 00:35:13.161023 ignition[868]: kargs: kargs passed Apr 24 00:35:13.161051 ignition[868]: Ignition finished successfully Apr 24 00:35:13.196056 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 24 00:35:13.209971 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 24 00:35:13.270868 ignition[876]: Ignition 2.22.0 Apr 24 00:35:13.270945 ignition[876]: Stage: disks Apr 24 00:35:13.271123 ignition[876]: no configs at "/usr/lib/ignition/base.d" Apr 24 00:35:13.271129 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 00:35:13.271833 ignition[876]: disks: disks passed Apr 24 00:35:13.271862 ignition[876]: Ignition finished successfully Apr 24 00:35:13.293895 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 24 00:35:13.297855 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 24 00:35:13.317663 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 24 00:35:13.332210 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 00:35:13.346271 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 00:35:13.361142 systemd[1]: Reached target basic.target - Basic System. Apr 24 00:35:13.387602 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 24 00:35:13.438585 systemd-fsck[886]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 24 00:35:13.447804 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 24 00:35:13.471000 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 24 00:35:13.723658 kernel: EXT4-fs (vda9): mounted filesystem 8c3ace63-1728-4b5e-a7b6-4ef650e41ba1 r/w with ordered data mode. Quota mode: none. Apr 24 00:35:13.724201 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 24 00:35:13.725053 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 24 00:35:13.736835 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 00:35:13.762289 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 24 00:35:13.762786 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 24 00:35:13.762820 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 24 00:35:13.762839 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 00:35:13.828599 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 24 00:35:13.869997 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (894) Apr 24 00:35:13.870018 kernel: BTRFS info (device vda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995 Apr 24 00:35:13.870032 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 00:35:13.870039 kernel: BTRFS info (device vda6): turning on async discard Apr 24 00:35:13.870046 kernel: BTRFS info (device vda6): enabling free space tree Apr 24 00:35:13.836786 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 24 00:35:13.864732 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 24 00:35:13.951991 initrd-setup-root[918]: cut: /sysroot/etc/passwd: No such file or directory Apr 24 00:35:13.972712 initrd-setup-root[925]: cut: /sysroot/etc/group: No such file or directory Apr 24 00:35:13.990898 initrd-setup-root[932]: cut: /sysroot/etc/shadow: No such file or directory Apr 24 00:35:14.010918 initrd-setup-root[939]: cut: /sysroot/etc/gshadow: No such file or directory Apr 24 00:35:14.083939 systemd-networkd[863]: eth0: Gained IPv6LL Apr 24 00:35:14.228084 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 24 00:35:14.237044 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 24 00:35:14.258171 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 24 00:35:14.271909 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 24 00:35:14.287718 kernel: BTRFS info (device vda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:35:14.327825 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 00:35:14.372470 ignition[1007]: INFO : Ignition 2.22.0
Apr 24 00:35:14.372470 ignition[1007]: INFO : Stage: mount
Apr 24 00:35:14.372470 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:35:14.389686 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 00:35:14.389686 ignition[1007]: INFO : mount: mount passed
Apr 24 00:35:14.389686 ignition[1007]: INFO : Ignition finished successfully
Apr 24 00:35:14.410917 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 00:35:14.419208 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 00:35:14.726774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 00:35:14.765613 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1020)
Apr 24 00:35:14.765653 kernel: BTRFS info (device vda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:35:14.778272 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:35:14.796113 kernel: BTRFS info (device vda6): turning on async discard
Apr 24 00:35:14.796149 kernel: BTRFS info (device vda6): enabling free space tree
Apr 24 00:35:14.799586 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 00:35:14.851519 ignition[1036]: INFO : Ignition 2.22.0
Apr 24 00:35:14.851519 ignition[1036]: INFO : Stage: files
Apr 24 00:35:14.863746 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:35:14.863746 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 00:35:14.863746 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 00:35:14.863746 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 00:35:14.863746 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 00:35:14.863746 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 00:35:14.863746 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 00:35:14.863746 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 00:35:14.863746 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 00:35:14.863746 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 00:35:14.860668 unknown[1036]: wrote ssh authorized keys file for user: core
Apr 24 00:35:14.975969 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 00:35:15.019958 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 00:35:15.019958 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 00:35:15.049714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 24 00:35:15.290313 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 24 00:35:15.629992 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 00:35:15.629992 ignition[1036]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 24 00:35:15.655160 ignition[1036]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 00:35:15.655160 ignition[1036]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 00:35:15.655160 ignition[1036]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 24 00:35:15.655160 ignition[1036]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 24 00:35:15.655160 ignition[1036]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 00:35:15.655160 ignition[1036]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 00:35:15.655160 ignition[1036]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 24 00:35:15.655160 ignition[1036]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 24 00:35:15.751089 ignition[1036]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 00:35:15.751089 ignition[1036]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 00:35:15.751089 ignition[1036]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 24 00:35:15.751089 ignition[1036]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 00:35:15.751089 ignition[1036]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 00:35:15.751089 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 00:35:15.751089 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 00:35:15.751089 ignition[1036]: INFO : files: files passed
Apr 24 00:35:15.751089 ignition[1036]: INFO : Ignition finished successfully
Apr 24 00:35:15.764174 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 00:35:15.777552 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 00:35:15.864013 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 00:35:15.864628 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 00:35:15.864750 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 00:35:15.928744 initrd-setup-root-after-ignition[1066]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 24 00:35:15.944775 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:35:15.955670 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:35:15.955670 initrd-setup-root-after-ignition[1068]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:35:15.979918 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 00:35:15.989526 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 00:35:15.998831 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 00:35:16.087796 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 00:35:16.087996 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 00:35:16.104818 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 00:35:16.113928 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 00:35:16.135653 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 00:35:16.154111 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 00:35:16.206905 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 00:35:16.217207 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 00:35:16.263137 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 00:35:16.263651 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:35:16.279194 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 00:35:16.307918 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 00:35:16.308216 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 00:35:16.331233 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 00:35:16.331672 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 00:35:16.345587 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 00:35:16.357884 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 00:35:16.372896 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 00:35:16.403888 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 24 00:35:16.412139 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 00:35:16.412621 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 00:35:16.450840 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 00:35:16.458628 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 00:35:16.465868 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 00:35:16.491209 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 00:35:16.491620 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 00:35:16.506104 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:35:16.528544 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:35:16.537498 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 00:35:16.554001 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:35:16.554298 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 00:35:16.554624 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 00:35:16.586841 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 00:35:16.587011 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 00:35:16.594585 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 00:35:16.609981 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 00:35:16.613875 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:35:16.624273 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 00:35:16.652724 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 00:35:16.665719 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 00:35:16.665907 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 00:35:16.681921 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 00:35:16.682089 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 00:35:16.688287 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 00:35:16.688652 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 00:35:16.701268 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 00:35:16.701612 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 00:35:16.717821 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 00:35:16.787654 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 00:35:16.794044 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 00:35:16.794164 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 00:35:16.803523 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 00:35:16.803611 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 00:35:16.848986 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 00:35:16.858961 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 00:35:16.876978 ignition[1092]: INFO : Ignition 2.22.0
Apr 24 00:35:16.876978 ignition[1092]: INFO : Stage: umount
Apr 24 00:35:16.876978 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:35:16.876978 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 00:35:16.876978 ignition[1092]: INFO : umount: umount passed
Apr 24 00:35:16.876978 ignition[1092]: INFO : Ignition finished successfully
Apr 24 00:35:16.877314 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 00:35:16.877675 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 00:35:16.882668 systemd[1]: Stopped target network.target - Network.
Apr 24 00:35:16.894518 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 00:35:16.894564 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 00:35:16.914842 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 00:35:16.914887 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 00:35:16.923687 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 00:35:16.923725 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 00:35:16.936074 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 00:35:16.936106 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 00:35:16.957138 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 00:35:16.964262 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 00:35:16.979128 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 00:35:17.072736 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 00:35:17.072894 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 00:35:17.073947 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 00:35:17.073986 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 00:35:17.102813 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 00:35:17.102992 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 00:35:17.140869 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 24 00:35:17.141087 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 00:35:17.141686 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 00:35:17.173150 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 24 00:35:17.174109 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 24 00:35:17.183782 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 00:35:17.183816 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:35:17.199857 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 00:35:17.213695 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 00:35:17.213741 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 00:35:17.233176 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 00:35:17.233210 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:35:17.271205 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 00:35:17.271240 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 00:35:17.278663 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 00:35:17.278691 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 00:35:17.326725 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 00:35:17.327847 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 24 00:35:17.327890 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 24 00:35:17.369237 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 00:35:17.369662 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 00:35:17.391633 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 00:35:17.391903 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 00:35:17.398818 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 00:35:17.398849 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:35:17.416248 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 00:35:17.416272 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:35:17.430977 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 00:35:17.431021 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 00:35:17.459983 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 00:35:17.460023 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 00:35:17.481726 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 00:35:17.481777 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 00:35:17.531207 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 00:35:17.547730 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 24 00:35:17.547854 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 00:35:17.564133 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 00:35:17.564187 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 00:35:17.600590 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 00:35:17.600708 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 00:35:17.610094 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 00:35:17.610128 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 00:35:17.618036 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 00:35:17.618071 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:35:17.669120 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 24 00:35:17.669169 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Apr 24 00:35:17.669192 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 24 00:35:17.669214 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 24 00:35:17.677014 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 00:35:17.677618 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 00:35:17.685836 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 00:35:17.733120 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 00:35:17.784828 systemd[1]: Switching root.
Apr 24 00:35:17.826218 systemd-journald[204]: Journal stopped
Apr 24 00:35:19.706055 systemd-journald[204]: Received SIGTERM from PID 1 (systemd).
Apr 24 00:35:19.706099 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 00:35:19.706109 kernel: SELinux: policy capability open_perms=1
Apr 24 00:35:19.706117 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 00:35:19.706124 kernel: SELinux: policy capability always_check_network=0
Apr 24 00:35:19.706135 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 00:35:19.706143 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 00:35:19.706154 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 00:35:19.706164 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 00:35:19.706171 kernel: SELinux: policy capability userspace_initial_context=0
Apr 24 00:35:19.706179 kernel: audit: type=1403 audit(1776990918.001:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 00:35:19.706188 systemd[1]: Successfully loaded SELinux policy in 84.210ms.
Apr 24 00:35:19.706199 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.026ms.
Apr 24 00:35:19.706207 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 24 00:35:19.706215 systemd[1]: Detected virtualization kvm.
Apr 24 00:35:19.706223 systemd[1]: Detected architecture x86-64.
Apr 24 00:35:19.706233 systemd[1]: Detected first boot.
Apr 24 00:35:19.706241 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 00:35:19.706249 zram_generator::config[1136]: No configuration found.
Apr 24 00:35:19.706257 kernel: Guest personality initialized and is inactive
Apr 24 00:35:19.706265 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 24 00:35:19.706272 kernel: Initialized host personality
Apr 24 00:35:19.706279 kernel: NET: Registered PF_VSOCK protocol family
Apr 24 00:35:19.706288 systemd[1]: Populated /etc with preset unit settings.
Apr 24 00:35:19.706296 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 24 00:35:19.706305 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 00:35:19.706314 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 00:35:19.706512 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 00:35:19.706521 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 00:35:19.706530 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 00:35:19.706538 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 00:35:19.706546 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 00:35:19.706553 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 00:35:19.706563 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 00:35:19.706572 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 00:35:19.706579 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 00:35:19.706587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:35:19.706596 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:35:19.706604 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 00:35:19.706616 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 00:35:19.706628 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 00:35:19.706643 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 00:35:19.706663 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 00:35:19.706671 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:35:19.706678 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:35:19.706686 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 00:35:19.706693 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 00:35:19.706701 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 00:35:19.706708 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 00:35:19.706716 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:35:19.706725 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 00:35:19.706733 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 00:35:19.706741 systemd[1]: Reached target swap.target - Swaps.
Apr 24 00:35:19.706751 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 00:35:19.706759 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 00:35:19.706767 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 24 00:35:19.706775 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:35:19.706783 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:35:19.706791 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:35:19.706801 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 00:35:19.706809 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 00:35:19.706816 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 00:35:19.706824 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 00:35:19.706832 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:35:19.706839 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 00:35:19.706847 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 00:35:19.706855 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 00:35:19.706864 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 00:35:19.706872 systemd[1]: Reached target machines.target - Containers.
Apr 24 00:35:19.706880 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 00:35:19.706888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 00:35:19.706895 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 00:35:19.706903 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 00:35:19.706910 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 00:35:19.706918 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 00:35:19.706925 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 00:35:19.706934 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 00:35:19.706942 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 00:35:19.706949 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 00:35:19.706958 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 00:35:19.706966 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 00:35:19.706974 kernel: loop: module loaded
Apr 24 00:35:19.706981 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 00:35:19.706988 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 00:35:19.706996 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 24 00:35:19.707005 kernel: fuse: init (API version 7.41)
Apr 24 00:35:19.707013 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 00:35:19.707020 kernel: ACPI: bus type drm_connector registered
Apr 24 00:35:19.707027 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 00:35:19.707035 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 00:35:19.707058 systemd-journald[1221]: Collecting audit messages is disabled.
Apr 24 00:35:19.707075 systemd-journald[1221]: Journal started
Apr 24 00:35:19.707093 systemd-journald[1221]: Runtime Journal (/run/log/journal/32297fa419484c75b4ee6ffa985ef4ee) is 6M, max 48.1M, 42.1M free.
Apr 24 00:35:18.664093 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 00:35:18.676856 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 24 00:35:18.677774 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 24 00:35:18.678094 systemd[1]: systemd-journald.service: Consumed 2.069s CPU time. Apr 24 00:35:19.730594 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 24 00:35:19.755664 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 24 00:35:19.763728 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 24 00:35:19.787619 systemd[1]: verity-setup.service: Deactivated successfully. Apr 24 00:35:19.787674 systemd[1]: Stopped verity-setup.service. Apr 24 00:35:19.787684 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:35:19.809803 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 00:35:19.817026 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 24 00:35:19.824235 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 24 00:35:19.832110 systemd[1]: Mounted media.mount - External Media Directory. Apr 24 00:35:19.838822 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 24 00:35:19.846712 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 24 00:35:19.854648 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 24 00:35:19.861635 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 24 00:35:19.870026 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 00:35:19.878928 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 24 00:35:19.879195 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 24 00:35:19.887878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 00:35:19.888134 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Apr 24 00:35:19.896291 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 00:35:19.896797 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 00:35:19.904702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 00:35:19.904949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 00:35:19.913603 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 24 00:35:19.913837 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 24 00:35:19.921677 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 00:35:19.921916 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 00:35:19.929945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 00:35:19.938093 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 24 00:35:19.947012 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 24 00:35:19.956158 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 24 00:35:19.965231 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 00:35:19.982877 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 24 00:35:19.991835 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 24 00:35:20.013878 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 24 00:35:20.021697 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 24 00:35:20.021790 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 00:35:20.030532 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Apr 24 00:35:20.041157 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 24 00:35:20.047976 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 00:35:20.049944 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 24 00:35:20.057977 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 24 00:35:20.065684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 00:35:20.067750 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 24 00:35:20.072600 systemd-journald[1221]: Time spent on flushing to /var/log/journal/32297fa419484c75b4ee6ffa985ef4ee is 13.992ms for 1069 entries. Apr 24 00:35:20.072600 systemd-journald[1221]: System Journal (/var/log/journal/32297fa419484c75b4ee6ffa985ef4ee) is 8M, max 195.6M, 187.6M free. Apr 24 00:35:20.097118 systemd-journald[1221]: Received client request to flush runtime journal. Apr 24 00:35:20.082546 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 00:35:20.089172 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 00:35:20.098865 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 24 00:35:20.112701 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 00:35:20.123532 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 24 00:35:20.138529 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Apr 24 00:35:20.155172 kernel: loop0: detected capacity change from 0 to 128560 Apr 24 00:35:20.151685 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 24 00:35:20.162038 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 24 00:35:20.171696 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 00:35:20.183960 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Apr 24 00:35:20.185533 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 24 00:35:20.185685 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Apr 24 00:35:20.189598 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 24 00:35:20.197932 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 24 00:35:20.206568 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 00:35:20.218665 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 24 00:35:20.241679 kernel: loop1: detected capacity change from 0 to 217752 Apr 24 00:35:20.264913 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 24 00:35:20.265777 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 24 00:35:20.300224 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 24 00:35:20.305578 kernel: loop2: detected capacity change from 0 to 110984 Apr 24 00:35:20.317898 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 00:35:20.347049 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Apr 24 00:35:20.347061 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. 
Apr 24 00:35:20.348523 kernel: loop3: detected capacity change from 0 to 128560 Apr 24 00:35:20.350157 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 00:35:20.378646 kernel: loop4: detected capacity change from 0 to 217752 Apr 24 00:35:20.410496 kernel: loop5: detected capacity change from 0 to 110984 Apr 24 00:35:20.434216 (sd-merge)[1282]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 24 00:35:20.434661 (sd-merge)[1282]: Merged extensions into '/usr'. Apr 24 00:35:20.439871 systemd[1]: Reload requested from client PID 1257 ('systemd-sysext') (unit systemd-sysext.service)... Apr 24 00:35:20.439961 systemd[1]: Reloading... Apr 24 00:35:20.530488 zram_generator::config[1309]: No configuration found. Apr 24 00:35:20.652842 ldconfig[1251]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 24 00:35:20.696206 systemd[1]: Reloading finished in 255 ms. Apr 24 00:35:20.715947 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 24 00:35:20.725761 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 24 00:35:20.736944 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 24 00:35:20.773190 systemd[1]: Starting ensure-sysext.service... Apr 24 00:35:20.779919 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 00:35:20.790779 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 00:35:20.823772 systemd[1]: Reload requested from client PID 1347 ('systemctl') (unit ensure-sysext.service)... Apr 24 00:35:20.823855 systemd[1]: Reloading... Apr 24 00:35:20.833705 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Apr 24 00:35:20.833964 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 24 00:35:20.834647 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 24 00:35:20.834855 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 24 00:35:20.835629 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 24 00:35:20.835848 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Apr 24 00:35:20.835915 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Apr 24 00:35:20.838936 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 00:35:20.838999 systemd-tmpfiles[1348]: Skipping /boot Apr 24 00:35:20.845098 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 00:35:20.845159 systemd-tmpfiles[1348]: Skipping /boot Apr 24 00:35:20.849912 systemd-udevd[1349]: Using default interface naming scheme 'v255'. Apr 24 00:35:20.889503 zram_generator::config[1373]: No configuration found. Apr 24 00:35:21.054637 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 24 00:35:21.054719 kernel: mousedev: PS/2 mouse device common for all mice Apr 24 00:35:21.065516 kernel: ACPI: button: Power Button [PWRF] Apr 24 00:35:21.094690 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 24 00:35:21.104628 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 24 00:35:21.117588 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 24 00:35:21.153054 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 24 00:35:21.162251 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Apr 24 00:35:21.162313 systemd[1]: Reloading finished in 338 ms. Apr 24 00:35:21.171912 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 00:35:21.186802 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 00:35:21.237096 systemd[1]: Finished ensure-sysext.service. Apr 24 00:35:21.445916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:35:21.449560 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 24 00:35:21.468928 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 24 00:35:21.476991 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 00:35:21.480135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 00:35:21.493954 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 00:35:21.502949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 00:35:21.514201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 00:35:21.522037 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 00:35:21.525235 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 00:35:21.534104 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 24 00:35:21.535852 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Apr 24 00:35:21.554252 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 00:35:21.569979 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 00:35:21.588961 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 24 00:35:21.600426 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 24 00:35:21.646910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 00:35:21.656147 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:35:21.661133 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 24 00:35:21.666977 augenrules[1500]: No rules Apr 24 00:35:21.673606 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 00:35:21.711925 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 24 00:35:21.731044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 00:35:21.731497 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 00:35:21.741050 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 00:35:21.741297 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 00:35:21.742058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 00:35:21.742231 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 00:35:21.749811 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 00:35:21.749998 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 00:35:21.757724 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Apr 24 00:35:21.766797 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 24 00:35:21.808230 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 00:35:21.808848 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 00:35:21.813829 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 00:35:21.944624 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 00:35:21.952056 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 00:35:21.954078 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 00:35:21.965602 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 00:35:21.975201 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 24 00:35:22.017879 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 24 00:35:22.094137 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 24 00:35:22.098084 systemd-resolved[1484]: Positive Trust Anchors: Apr 24 00:35:22.098185 systemd-resolved[1484]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 00:35:22.098211 systemd-resolved[1484]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 00:35:22.100241 systemd-networkd[1483]: lo: Link UP Apr 24 00:35:22.100629 systemd-networkd[1483]: lo: Gained carrier Apr 24 00:35:22.101627 systemd-networkd[1483]: Enumeration completed Apr 24 00:35:22.101960 systemd-resolved[1484]: Defaulting to hostname 'linux'. Apr 24 00:35:22.103126 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 00:35:22.103128 systemd-networkd[1483]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 00:35:22.104968 systemd-networkd[1483]: eth0: Link UP Apr 24 00:35:22.105180 systemd-networkd[1483]: eth0: Gained carrier Apr 24 00:35:22.105203 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 00:35:22.105249 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 00:35:22.114961 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 00:35:22.124013 systemd[1]: Reached target network.target - Network. Apr 24 00:35:22.131617 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 00:35:22.141855 systemd[1]: Reached target sysinit.target - System Initialization. 
Apr 24 00:35:22.150970 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 00:35:22.162307 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 00:35:22.172966 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 24 00:35:22.182596 systemd-networkd[1483]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 24 00:35:22.182839 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 24 00:35:22.183199 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection. Apr 24 00:35:24.355239 systemd-resolved[1484]: Clock change detected. Flushing caches. Apr 24 00:35:24.355262 systemd-timesyncd[1487]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 24 00:35:24.355333 systemd-timesyncd[1487]: Initial clock synchronization to Fri 2026-04-24 00:35:24.355145 UTC. Apr 24 00:35:24.364356 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 00:35:24.364457 systemd[1]: Reached target paths.target - Path Units. Apr 24 00:35:24.372595 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 00:35:24.381787 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 24 00:35:24.391410 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 00:35:24.401127 systemd[1]: Reached target timers.target - Timer Units. Apr 24 00:35:24.410329 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 00:35:24.420664 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 00:35:24.430118 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Apr 24 00:35:24.440231 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 24 00:35:24.449477 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 24 00:35:24.460206 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 24 00:35:24.468391 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 24 00:35:24.478799 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 24 00:35:24.488691 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 00:35:24.497835 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 00:35:24.507266 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 00:35:24.514539 systemd[1]: Reached target basic.target - Basic System. Apr 24 00:35:24.521506 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 24 00:35:24.521587 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 00:35:24.526448 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 00:35:24.537162 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 00:35:24.544799 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 24 00:35:24.554669 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 00:35:24.572758 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 24 00:35:24.578173 jq[1539]: false Apr 24 00:35:24.581194 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Apr 24 00:35:24.582486 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 24 00:35:24.591523 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 00:35:24.601220 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 24 00:35:24.611280 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 00:35:24.611472 extend-filesystems[1540]: Found /dev/vda6 Apr 24 00:35:24.629145 extend-filesystems[1540]: Found /dev/vda9 Apr 24 00:35:24.629145 extend-filesystems[1540]: Checking size of /dev/vda9 Apr 24 00:35:24.621491 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 24 00:35:24.633086 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 00:35:24.633673 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 00:35:24.634198 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 00:35:24.636278 systemd[1]: Starting update-engine.service - Update Engine... Apr 24 00:35:24.651361 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 24 00:35:24.655703 extend-filesystems[1540]: Resized partition /dev/vda9 Apr 24 00:35:24.699235 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 24 00:35:24.663809 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 24 00:35:24.699315 extend-filesystems[1564]: resize2fs 1.47.3 (8-Jul-2025) Apr 24 00:35:24.672196 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 24 00:35:24.694326 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Apr 24 00:35:24.694548 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 24 00:35:24.721354 jq[1562]: true Apr 24 00:35:24.694723 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 00:35:24.695182 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 24 00:35:24.696816 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 00:35:24.697138 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 24 00:35:24.728731 jq[1579]: true Apr 24 00:35:24.732798 tar[1569]: linux-amd64/LICENSE Apr 24 00:35:24.732798 tar[1569]: linux-amd64/helm Apr 24 00:35:24.749152 dbus-daemon[1537]: [system] SELinux support is enabled Apr 24 00:35:24.757614 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing passwd entry cache Apr 24 00:35:24.749330 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 24 00:35:24.757744 update_engine[1559]: I20260424 00:35:24.752389 1559 main.cc:92] Flatcar Update Engine starting Apr 24 00:35:24.753630 oslogin_cache_refresh[1541]: Refreshing passwd entry cache Apr 24 00:35:24.760589 (ntainerd)[1570]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 00:35:24.761528 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 00:35:24.761548 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Apr 24 00:35:24.775697 oslogin_cache_refresh[1541]: Failure getting users, quitting Apr 24 00:35:24.782446 update_engine[1559]: I20260424 00:35:24.767176 1559 update_check_scheduler.cc:74] Next update check in 4m29s Apr 24 00:35:24.782470 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting users, quitting Apr 24 00:35:24.782470 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 24 00:35:24.782470 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing group entry cache Apr 24 00:35:24.771571 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 24 00:35:24.775710 oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 24 00:35:24.771585 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 00:35:24.775745 oslogin_cache_refresh[1541]: Refreshing group entry cache Apr 24 00:35:24.784403 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting groups, quitting Apr 24 00:35:24.784403 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 24 00:35:24.783665 oslogin_cache_refresh[1541]: Failure getting groups, quitting Apr 24 00:35:24.783677 oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 24 00:35:24.788755 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 24 00:35:24.789444 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 24 00:35:24.806208 systemd[1]: Started update-engine.service - Update Engine. Apr 24 00:35:24.816159 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 24 00:35:24.847206 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 24 00:35:24.875357 systemd-logind[1558]: Watching system buttons on /dev/input/event2 (Power Button) Apr 24 00:35:24.875374 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 24 00:35:24.877679 systemd-logind[1558]: New seat seat0. Apr 24 00:35:24.879075 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 24 00:35:24.879075 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 24 00:35:24.879075 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 24 00:35:24.881434 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 00:35:24.916819 bash[1598]: Updated "/home/core/.ssh/authorized_keys" Apr 24 00:35:24.917086 extend-filesystems[1540]: Resized filesystem in /dev/vda9 Apr 24 00:35:24.882151 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 24 00:35:24.907405 systemd[1]: Started systemd-logind.service - User Login Management. Apr 24 00:35:24.924724 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 00:35:24.935815 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Apr 24 00:35:24.942813 locksmithd[1600]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 00:35:25.023812 containerd[1570]: time="2026-04-24T00:35:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 24 00:35:25.025289 containerd[1570]: time="2026-04-24T00:35:25.024764648Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 24 00:35:25.036739 containerd[1570]: time="2026-04-24T00:35:25.036714600Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.47µs" Apr 24 00:35:25.037970 containerd[1570]: time="2026-04-24T00:35:25.036801273Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 24 00:35:25.037970 containerd[1570]: time="2026-04-24T00:35:25.036817617Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 24 00:35:25.037970 containerd[1570]: time="2026-04-24T00:35:25.037107144Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 24 00:35:25.037970 containerd[1570]: time="2026-04-24T00:35:25.037119610Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 24 00:35:25.037970 containerd[1570]: time="2026-04-24T00:35:25.037136416Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 24 00:35:25.037970 containerd[1570]: time="2026-04-24T00:35:25.037168115Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 24 00:35:25.037970 containerd[1570]: time="2026-04-24T00:35:25.037175678Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 24 00:35:25.037970 containerd[1570]: time="2026-04-24T00:35:25.037323932Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 24 00:35:25.037970 containerd[1570]: time="2026-04-24T00:35:25.037333618Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 24 00:35:25.037970 containerd[1570]: time="2026-04-24T00:35:25.037340466Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 24 00:35:25.037970 containerd[1570]: time="2026-04-24T00:35:25.037345508Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 24 00:35:25.037970 containerd[1570]: time="2026-04-24T00:35:25.037396638Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 24 00:35:25.038208 containerd[1570]: time="2026-04-24T00:35:25.037520349Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 24 00:35:25.038208 containerd[1570]: time="2026-04-24T00:35:25.037536466Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 24 00:35:25.038208 containerd[1570]: time="2026-04-24T00:35:25.037542595Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 24 00:35:25.038208 containerd[1570]: time="2026-04-24T00:35:25.037646323Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 24 00:35:25.038390 containerd[1570]: time="2026-04-24T00:35:25.038377515Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 24 00:35:25.038459 containerd[1570]: time="2026-04-24T00:35:25.038451125Z" level=info msg="metadata content store policy set" policy=shared
Apr 24 00:35:25.047163 containerd[1570]: time="2026-04-24T00:35:25.046770577Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 24 00:35:25.047499 containerd[1570]: time="2026-04-24T00:35:25.047360448Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 24 00:35:25.047499 containerd[1570]: time="2026-04-24T00:35:25.047441762Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 24 00:35:25.047499 containerd[1570]: time="2026-04-24T00:35:25.047451694Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 24 00:35:25.047499 containerd[1570]: time="2026-04-24T00:35:25.047460076Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 24 00:35:25.047499 containerd[1570]: time="2026-04-24T00:35:25.047467337Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 24 00:35:25.047499 containerd[1570]: time="2026-04-24T00:35:25.047482486Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 24 00:35:25.047499 containerd[1570]: time="2026-04-24T00:35:25.047491429Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 24 00:35:25.047499 containerd[1570]: time="2026-04-24T00:35:25.047498764Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 24 00:35:25.047616 containerd[1570]: time="2026-04-24T00:35:25.047506038Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 24 00:35:25.047616 containerd[1570]: time="2026-04-24T00:35:25.047513154Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 24 00:35:25.047616 containerd[1570]: time="2026-04-24T00:35:25.047523236Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 24 00:35:25.047616 containerd[1570]: time="2026-04-24T00:35:25.047610333Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 24 00:35:25.047667 containerd[1570]: time="2026-04-24T00:35:25.047623772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 24 00:35:25.047667 containerd[1570]: time="2026-04-24T00:35:25.047633546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 24 00:35:25.047667 containerd[1570]: time="2026-04-24T00:35:25.047642798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 24 00:35:25.047667 containerd[1570]: time="2026-04-24T00:35:25.047651089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 24 00:35:25.047667 containerd[1570]: time="2026-04-24T00:35:25.047658330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 24 00:35:25.047667 containerd[1570]: time="2026-04-24T00:35:25.047665898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 24 00:35:25.047746 containerd[1570]: time="2026-04-24T00:35:25.047672729Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 24 00:35:25.047746 containerd[1570]: time="2026-04-24T00:35:25.047680928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 24 00:35:25.047746 containerd[1570]: time="2026-04-24T00:35:25.047688457Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 24 00:35:25.047746 containerd[1570]: time="2026-04-24T00:35:25.047695373Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 24 00:35:25.047746 containerd[1570]: time="2026-04-24T00:35:25.047727297Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 24 00:35:25.047746 containerd[1570]: time="2026-04-24T00:35:25.047735875Z" level=info msg="Start snapshots syncer"
Apr 24 00:35:25.048345 containerd[1570]: time="2026-04-24T00:35:25.048254994Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 24 00:35:25.048667 containerd[1570]: time="2026-04-24T00:35:25.048519792Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 24 00:35:25.048667 containerd[1570]: time="2026-04-24T00:35:25.048631593Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 24 00:35:25.048967 containerd[1570]: time="2026-04-24T00:35:25.048729045Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 24 00:35:25.048967 containerd[1570]: time="2026-04-24T00:35:25.048797258Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 24 00:35:25.048967 containerd[1570]: time="2026-04-24T00:35:25.048812193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 24 00:35:25.048967 containerd[1570]: time="2026-04-24T00:35:25.048820095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 24 00:35:25.048967 containerd[1570]: time="2026-04-24T00:35:25.048826418Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 24 00:35:25.048967 containerd[1570]: time="2026-04-24T00:35:25.048834789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 24 00:35:25.048967 containerd[1570]: time="2026-04-24T00:35:25.048841126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 24 00:35:25.049126 containerd[1570]: time="2026-04-24T00:35:25.049106647Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 24 00:35:25.049143 containerd[1570]: time="2026-04-24T00:35:25.049127034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 24 00:35:25.049143 containerd[1570]: time="2026-04-24T00:35:25.049134420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 24 00:35:25.049143 containerd[1570]: time="2026-04-24T00:35:25.049141418Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 24 00:35:25.049580 containerd[1570]: time="2026-04-24T00:35:25.049449829Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 24 00:35:25.049580 containerd[1570]: time="2026-04-24T00:35:25.049536083Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 24 00:35:25.049580 containerd[1570]: time="2026-04-24T00:35:25.049544682Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 24 00:35:25.049580 containerd[1570]: time="2026-04-24T00:35:25.049551287Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 24 00:35:25.049580 containerd[1570]: time="2026-04-24T00:35:25.049556420Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 24 00:35:25.049580 containerd[1570]: time="2026-04-24T00:35:25.049562987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 24 00:35:25.049580 containerd[1570]: time="2026-04-24T00:35:25.049579551Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 24 00:35:25.049804 containerd[1570]: time="2026-04-24T00:35:25.049591669Z" level=info msg="runtime interface created"
Apr 24 00:35:25.049804 containerd[1570]: time="2026-04-24T00:35:25.049796285Z" level=info msg="created NRI interface"
Apr 24 00:35:25.049835 containerd[1570]: time="2026-04-24T00:35:25.049809799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 24 00:35:25.049835 containerd[1570]: time="2026-04-24T00:35:25.049829515Z" level=info msg="Connect containerd service"
Apr 24 00:35:25.050189 containerd[1570]: time="2026-04-24T00:35:25.050114120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 24 00:35:25.051158 containerd[1570]: time="2026-04-24T00:35:25.050811764Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 24 00:35:25.155393 containerd[1570]: time="2026-04-24T00:35:25.155231681Z" level=info msg="Start subscribing containerd event"
Apr 24 00:35:25.155393 containerd[1570]: time="2026-04-24T00:35:25.155367337Z" level=info msg="Start recovering state"
Apr 24 00:35:25.156191 containerd[1570]: time="2026-04-24T00:35:25.155708586Z" level=info msg="Start event monitor"
Apr 24 00:35:25.156191 containerd[1570]: time="2026-04-24T00:35:25.155795461Z" level=info msg="Start cni network conf syncer for default"
Apr 24 00:35:25.156191 containerd[1570]: time="2026-04-24T00:35:25.155802922Z" level=info msg="Start streaming server"
Apr 24 00:35:25.156191 containerd[1570]: time="2026-04-24T00:35:25.155813848Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 24 00:35:25.156191 containerd[1570]: time="2026-04-24T00:35:25.155819911Z" level=info msg="runtime interface starting up..."
Apr 24 00:35:25.156191 containerd[1570]: time="2026-04-24T00:35:25.155824015Z" level=info msg="starting plugins..."
Apr 24 00:35:25.156191 containerd[1570]: time="2026-04-24T00:35:25.155834476Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Apr 24 00:35:25.157264 containerd[1570]: time="2026-04-24T00:35:25.156768426Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 24 00:35:25.157460 containerd[1570]: time="2026-04-24T00:35:25.157362673Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 24 00:35:25.160351 containerd[1570]: time="2026-04-24T00:35:25.160232678Z" level=info msg="containerd successfully booted in 0.137474s"
Apr 24 00:35:25.160486 systemd[1]: Started containerd.service - containerd container runtime.
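The `failed to load cni during init` error above is benign at this point in boot: containerd's CRI plugin found no network config in /etc/cni/net.d, which is expected before a CNI provider has installed one. For reference, a conflist of the general shape the CRI plugin loads from that directory is sketched below; the name, bridge, and subnet values are illustrative assumptions, not taken from this host (on a kubeadm-managed node the CNI add-on normally writes this file later).

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Per the `starting cri plugin` config logged above, such a file would be read from confDir /etc/cni/net.d with plugin binaries expected under binDir /opt/cni/bin.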
Apr 24 00:35:25.211287 tar[1569]: linux-amd64/README.md
Apr 24 00:35:25.217252 sshd_keygen[1566]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 24 00:35:25.232767 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 24 00:35:25.260383 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 24 00:35:25.270193 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 24 00:35:25.299377 systemd[1]: issuegen.service: Deactivated successfully.
Apr 24 00:35:25.299805 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 24 00:35:25.310768 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 24 00:35:25.333758 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 24 00:35:25.343063 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:34232.service - OpenSSH per-connection server daemon (10.0.0.1:34232).
Apr 24 00:35:25.354598 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 24 00:35:25.367536 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 24 00:35:25.376629 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 24 00:35:25.385284 systemd[1]: Reached target getty.target - Login Prompts.
Apr 24 00:35:25.434960 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 34232 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:35:25.437626 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:35:25.447272 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 24 00:35:25.456571 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 24 00:35:25.475169 systemd-logind[1558]: New session 1 of user core.
Apr 24 00:35:25.490821 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 24 00:35:25.503201 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 24 00:35:25.529614 (systemd)[1655]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 24 00:35:25.534659 systemd-logind[1558]: New session c1 of user core.
Apr 24 00:35:25.659390 systemd[1655]: Queued start job for default target default.target.
Apr 24 00:35:25.668831 systemd[1655]: Created slice app.slice - User Application Slice.
Apr 24 00:35:25.669208 systemd[1655]: Reached target paths.target - Paths.
Apr 24 00:35:25.669351 systemd[1655]: Reached target timers.target - Timers.
Apr 24 00:35:25.670678 systemd[1655]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 24 00:35:25.687353 systemd[1655]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 24 00:35:25.687500 systemd[1655]: Reached target sockets.target - Sockets.
Apr 24 00:35:25.687646 systemd[1655]: Reached target basic.target - Basic System.
Apr 24 00:35:25.687725 systemd[1655]: Reached target default.target - Main User Target.
Apr 24 00:35:25.687742 systemd[1655]: Startup finished in 143ms.
Apr 24 00:35:25.687762 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 24 00:35:25.697615 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 24 00:35:25.718247 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:34248.service - OpenSSH per-connection server daemon (10.0.0.1:34248).
Apr 24 00:35:25.790276 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 34248 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:35:25.791092 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:35:25.797400 systemd-logind[1558]: New session 2 of user core.
Apr 24 00:35:25.807165 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 24 00:35:25.834245 sshd[1669]: Connection closed by 10.0.0.1 port 34248
Apr 24 00:35:25.835452 sshd-session[1666]: pam_unix(sshd:session): session closed for user core
Apr 24 00:35:25.842778 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:34248.service: Deactivated successfully.
Apr 24 00:35:25.844326 systemd[1]: session-2.scope: Deactivated successfully.
Apr 24 00:35:25.845446 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit.
Apr 24 00:35:25.847539 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:34264.service - OpenSSH per-connection server daemon (10.0.0.1:34264).
Apr 24 00:35:25.859191 systemd-logind[1558]: Removed session 2.
Apr 24 00:35:25.920749 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 34264 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:35:25.921803 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:35:25.928979 systemd-logind[1558]: New session 3 of user core.
Apr 24 00:35:25.938203 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 24 00:35:25.965647 sshd[1678]: Connection closed by 10.0.0.1 port 34264
Apr 24 00:35:25.966070 sshd-session[1675]: pam_unix(sshd:session): session closed for user core
Apr 24 00:35:25.970138 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:34264.service: Deactivated successfully.
Apr 24 00:35:25.971764 systemd[1]: session-3.scope: Deactivated successfully.
Apr 24 00:35:25.973159 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit.
Apr 24 00:35:25.975361 systemd-logind[1558]: Removed session 3.
Apr 24 00:35:26.303103 systemd-networkd[1483]: eth0: Gained IPv6LL
Apr 24 00:35:26.306821 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 24 00:35:26.317421 systemd[1]: Reached target network-online.target - Network is Online.
Apr 24 00:35:26.328638 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 24 00:35:26.347151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 00:35:26.356713 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 24 00:35:26.404708 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 24 00:35:26.415190 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 24 00:35:26.415557 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 24 00:35:26.425563 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 24 00:35:27.422396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:35:27.430146 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 00:35:27.433378 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 24 00:35:27.444051 systemd[1]: Startup finished in 5.002s (kernel) + 10.042s (initrd) + 7.355s (userspace) = 22.399s.
Apr 24 00:35:28.104615 kubelet[1706]: E0424 00:35:28.104436    1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 00:35:28.107365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 00:35:28.107551 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 00:35:28.108333 systemd[1]: kubelet.service: Consumed 1.069s CPU time, 256.1M memory peak.
Apr 24 00:35:35.985563 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:44046.service - OpenSSH per-connection server daemon (10.0.0.1:44046).
Apr 24 00:35:36.054720 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 44046 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:35:36.056145 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:35:36.064375 systemd-logind[1558]: New session 4 of user core.
Apr 24 00:35:36.078395 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 24 00:35:36.100664 sshd[1722]: Connection closed by 10.0.0.1 port 44046
Apr 24 00:35:36.101838 sshd-session[1719]: pam_unix(sshd:session): session closed for user core
Apr 24 00:35:36.109792 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:44046.service: Deactivated successfully.
Apr 24 00:35:36.111645 systemd[1]: session-4.scope: Deactivated successfully.
Apr 24 00:35:36.113373 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit.
Apr 24 00:35:36.116639 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:44052.service - OpenSSH per-connection server daemon (10.0.0.1:44052).
Apr 24 00:35:36.119600 systemd-logind[1558]: Removed session 4.
Apr 24 00:35:36.188838 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 44052 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:35:36.190208 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:35:36.198599 systemd-logind[1558]: New session 5 of user core.
Apr 24 00:35:36.208299 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 24 00:35:36.222379 sshd[1731]: Connection closed by 10.0.0.1 port 44052
Apr 24 00:35:36.222573 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
Apr 24 00:35:36.241117 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:44052.service: Deactivated successfully.
Apr 24 00:35:36.243274 systemd[1]: session-5.scope: Deactivated successfully.
Apr 24 00:35:36.245613 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit.
Apr 24 00:35:36.248653 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:44062.service - OpenSSH per-connection server daemon (10.0.0.1:44062).
Apr 24 00:35:36.250748 systemd-logind[1558]: Removed session 5.
Apr 24 00:35:36.316549 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 44062 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:35:36.318450 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:35:36.326477 systemd-logind[1558]: New session 6 of user core.
Apr 24 00:35:36.333467 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 24 00:35:36.356492 sshd[1740]: Connection closed by 10.0.0.1 port 44062
Apr 24 00:35:36.356995 sshd-session[1737]: pam_unix(sshd:session): session closed for user core
Apr 24 00:35:36.367398 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:44062.service: Deactivated successfully.
Apr 24 00:35:36.369229 systemd[1]: session-6.scope: Deactivated successfully.
Apr 24 00:35:36.370627 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit.
Apr 24 00:35:36.373384 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:44074.service - OpenSSH per-connection server daemon (10.0.0.1:44074).
Apr 24 00:35:36.375542 systemd-logind[1558]: Removed session 6.
Apr 24 00:35:36.445366 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 44074 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:35:36.446766 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:35:36.454592 systemd-logind[1558]: New session 7 of user core.
Apr 24 00:35:36.463191 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 24 00:35:36.493125 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 24 00:35:36.493373 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 00:35:36.513205 sudo[1750]: pam_unix(sudo:session): session closed for user root
Apr 24 00:35:36.515550 sshd[1749]: Connection closed by 10.0.0.1 port 44074
Apr 24 00:35:36.515719 sshd-session[1746]: pam_unix(sshd:session): session closed for user core
Apr 24 00:35:36.529200 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:44074.service: Deactivated successfully.
Apr 24 00:35:36.530760 systemd[1]: session-7.scope: Deactivated successfully.
Apr 24 00:35:36.534370 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit.
Apr 24 00:35:36.536001 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:44084.service - OpenSSH per-connection server daemon (10.0.0.1:44084).
Apr 24 00:35:36.540342 systemd-logind[1558]: Removed session 7.
Apr 24 00:35:36.605311 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 44084 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:35:36.606718 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:35:36.615402 systemd-logind[1558]: New session 8 of user core.
Apr 24 00:35:36.625567 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 24 00:35:36.689227 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 24 00:35:36.689541 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 00:35:36.699684 sudo[1761]: pam_unix(sudo:session): session closed for user root
Apr 24 00:35:36.708730 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 24 00:35:36.709220 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 00:35:36.725268 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 24 00:35:36.813661 augenrules[1783]: No rules
Apr 24 00:35:36.815563 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 24 00:35:36.816220 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 24 00:35:36.817468 sudo[1760]: pam_unix(sudo:session): session closed for user root
Apr 24 00:35:36.819793 sshd[1759]: Connection closed by 10.0.0.1 port 44084
Apr 24 00:35:36.820131 sshd-session[1756]: pam_unix(sshd:session): session closed for user core
Apr 24 00:35:36.833557 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:44084.service: Deactivated successfully.
Apr 24 00:35:36.835333 systemd[1]: session-8.scope: Deactivated successfully.
Apr 24 00:35:36.836666 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit.
Apr 24 00:35:36.839176 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:44096.service - OpenSSH per-connection server daemon (10.0.0.1:44096).
Apr 24 00:35:36.841767 systemd-logind[1558]: Removed session 8.
Apr 24 00:35:36.920237 sshd[1792]: Accepted publickey for core from 10.0.0.1 port 44096 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:35:36.922639 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:35:36.930265 systemd-logind[1558]: New session 9 of user core.
Apr 24 00:35:36.944309 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 24 00:35:36.965200 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 24 00:35:36.965490 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 00:35:39.278687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 24 00:35:39.283157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 00:35:40.122200 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1010250878 wd_nsec: 1010250866
Apr 24 00:35:40.169267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:35:40.185611 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 00:35:40.933241 kubelet[1826]: E0424 00:35:40.932995    1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 00:35:40.939983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 00:35:40.940385 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 00:35:40.940777 systemd[1]: kubelet.service: Consumed 1.758s CPU time, 111.2M memory peak.
Apr 24 00:35:41.646031 kernel: hrtimer: interrupt took 2831075 ns
Apr 24 00:35:44.023739 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 24 00:35:44.045777 (dockerd)[1835]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 24 00:35:48.590387 dockerd[1835]: time="2026-04-24T00:35:48.588631349Z" level=info msg="Starting up"
Apr 24 00:35:48.597419 dockerd[1835]: time="2026-04-24T00:35:48.597326917Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 24 00:35:48.734360 dockerd[1835]: time="2026-04-24T00:35:48.733761112Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 24 00:35:48.849520 dockerd[1835]: time="2026-04-24T00:35:48.849286048Z" level=info msg="Loading containers: start."
Apr 24 00:35:48.888187 kernel: Initializing XFRM netlink socket
Apr 24 00:35:50.354042 systemd-networkd[1483]: docker0: Link UP
Apr 24 00:35:50.368157 dockerd[1835]: time="2026-04-24T00:35:50.367982128Z" level=info msg="Loading containers: done."
Apr 24 00:35:51.047489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 24 00:35:51.052183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 00:35:51.647551 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1298756681-merged.mount: Deactivated successfully.
Apr 24 00:35:51.652801 dockerd[1835]: time="2026-04-24T00:35:51.652551302Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 24 00:35:51.655505 dockerd[1835]: time="2026-04-24T00:35:51.655390731Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Apr 24 00:35:51.656672 dockerd[1835]: time="2026-04-24T00:35:51.656526839Z" level=info msg="Initializing buildkit"
Apr 24 00:35:51.807462 dockerd[1835]: time="2026-04-24T00:35:51.807356220Z" level=info msg="Completed buildkit initialization"
Apr 24 00:35:51.820649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:35:51.833262 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 00:35:51.925757 dockerd[1835]: time="2026-04-24T00:35:51.925523003Z" level=info msg="Daemon has completed initialization"
Apr 24 00:35:51.927199 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 24 00:35:51.927306 dockerd[1835]: time="2026-04-24T00:35:51.926722983Z" level=info msg="API listen on /run/docker.sock"
Apr 24 00:35:52.069068 kubelet[2047]: E0424 00:35:52.068723    2047 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 00:35:52.072586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 00:35:52.072799 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 00:35:52.073614 systemd[1]: kubelet.service: Consumed 924ms CPU time, 110.9M memory peak.
Apr 24 00:35:55.903827 containerd[1570]: time="2026-04-24T00:35:55.903703775Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 24 00:35:56.856671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3763680276.mount: Deactivated successfully.
Apr 24 00:36:02.397409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 24 00:36:02.401454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 00:36:02.851585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:36:02.899513 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 00:36:03.750173 containerd[1570]: time="2026-04-24T00:36:03.749517266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:03.755333 containerd[1570]: time="2026-04-24T00:36:03.755146742Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861"
Apr 24 00:36:03.758782 containerd[1570]: time="2026-04-24T00:36:03.758565221Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:03.764290 containerd[1570]: time="2026-04-24T00:36:03.764168985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:03.767406 containerd[1570]: time="2026-04-24T00:36:03.767215017Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 7.863402855s"
Apr 24 00:36:03.767406 containerd[1570]: time="2026-04-24T00:36:03.767254742Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\""
Apr 24 00:36:03.772670 containerd[1570]: time="2026-04-24T00:36:03.772623724Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 24 00:36:04.153563 kubelet[2139]: E0424 00:36:04.153333 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 00:36:04.160385 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 00:36:04.160599 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 00:36:04.161798 systemd[1]: kubelet.service: Consumed 2.233s CPU time, 112.2M memory peak.
Apr 24 00:36:08.995983 containerd[1570]: time="2026-04-24T00:36:08.995476068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:08.998452 containerd[1570]: time="2026-04-24T00:36:08.998350682Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591"
Apr 24 00:36:09.000252 containerd[1570]: time="2026-04-24T00:36:09.000151371Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:09.007284 containerd[1570]: time="2026-04-24T00:36:09.007090720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:09.008361 containerd[1570]: time="2026-04-24T00:36:09.008236272Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 5.234951839s"
Apr 24 00:36:09.008361 containerd[1570]: time="2026-04-24T00:36:09.008335578Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\""
Apr 24 00:36:09.011254 containerd[1570]: time="2026-04-24T00:36:09.010635891Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 24 00:36:10.064311 update_engine[1559]: I20260424 00:36:10.064199 1559 update_attempter.cc:509] Updating boot flags...
Apr 24 00:36:10.599312 containerd[1570]: time="2026-04-24T00:36:10.599187905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:10.603353 containerd[1570]: time="2026-04-24T00:36:10.602622496Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222"
Apr 24 00:36:10.605110 containerd[1570]: time="2026-04-24T00:36:10.605086086Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:10.609827 containerd[1570]: time="2026-04-24T00:36:10.609624579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:10.610502 containerd[1570]: time="2026-04-24T00:36:10.610412609Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.599677805s"
Apr 24 00:36:10.610532 containerd[1570]: time="2026-04-24T00:36:10.610502756Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\""
Apr 24 00:36:10.613252 containerd[1570]: time="2026-04-24T00:36:10.612834935Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 24 00:36:11.894740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4293706584.mount: Deactivated successfully.
Apr 24 00:36:12.702799 containerd[1570]: time="2026-04-24T00:36:12.702540346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:12.703977 containerd[1570]: time="2026-04-24T00:36:12.703756085Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819"
Apr 24 00:36:12.706273 containerd[1570]: time="2026-04-24T00:36:12.706129583Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:12.712421 containerd[1570]: time="2026-04-24T00:36:12.712257884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:12.713260 containerd[1570]: time="2026-04-24T00:36:12.712763319Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 2.099543221s"
Apr 24 00:36:12.713315 containerd[1570]: time="2026-04-24T00:36:12.713279690Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 24 00:36:12.717171 containerd[1570]: time="2026-04-24T00:36:12.716988286Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 24 00:36:13.336458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2585881394.mount: Deactivated successfully.
Apr 24 00:36:14.296487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 24 00:36:14.298702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 00:36:14.545293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:36:14.556514 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 00:36:14.681063 kubelet[2244]: E0424 00:36:14.680739 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 00:36:14.683314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 00:36:14.683538 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 00:36:14.685631 systemd[1]: kubelet.service: Consumed 308ms CPU time, 112.7M memory peak.
Apr 24 00:36:16.687358 containerd[1570]: time="2026-04-24T00:36:16.686536790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:16.689547 containerd[1570]: time="2026-04-24T00:36:16.689133208Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980"
Apr 24 00:36:16.692538 containerd[1570]: time="2026-04-24T00:36:16.692397296Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:16.699531 containerd[1570]: time="2026-04-24T00:36:16.699295947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:16.700787 containerd[1570]: time="2026-04-24T00:36:16.700694391Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 3.983535756s"
Apr 24 00:36:16.700787 containerd[1570]: time="2026-04-24T00:36:16.700736491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 24 00:36:16.707348 containerd[1570]: time="2026-04-24T00:36:16.706453004Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 24 00:36:17.307137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4050326781.mount: Deactivated successfully.
Apr 24 00:36:17.341351 containerd[1570]: time="2026-04-24T00:36:17.341088879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:17.342828 containerd[1570]: time="2026-04-24T00:36:17.342545532Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150"
Apr 24 00:36:17.345606 containerd[1570]: time="2026-04-24T00:36:17.345400390Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:17.356608 containerd[1570]: time="2026-04-24T00:36:17.356389666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:17.359063 containerd[1570]: time="2026-04-24T00:36:17.358801482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 652.296653ms"
Apr 24 00:36:17.359320 containerd[1570]: time="2026-04-24T00:36:17.359208004Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 24 00:36:17.366682 containerd[1570]: time="2026-04-24T00:36:17.366556797Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 24 00:36:17.995368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503646527.mount: Deactivated successfully.
Apr 24 00:36:23.708158 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2152026274 wd_nsec: 2152026389
Apr 24 00:36:24.818617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 24 00:36:24.873361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 00:36:25.691813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:36:25.730707 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 00:36:25.851563 containerd[1570]: time="2026-04-24T00:36:25.851343099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:25.853327 containerd[1570]: time="2026-04-24T00:36:25.853249455Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979"
Apr 24 00:36:25.856488 containerd[1570]: time="2026-04-24T00:36:25.856127121Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:25.868359 containerd[1570]: time="2026-04-24T00:36:25.868315113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:36:25.872626 containerd[1570]: time="2026-04-24T00:36:25.872568756Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 8.505899255s"
Apr 24 00:36:25.873200 containerd[1570]: time="2026-04-24T00:36:25.872742728Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 24 00:36:26.540485 kubelet[2320]: E0424 00:36:26.540210 2320 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 00:36:26.549822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 00:36:26.550435 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 00:36:26.552371 systemd[1]: kubelet.service: Consumed 1.389s CPU time, 111M memory peak.
Apr 24 00:36:30.548629 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:36:30.549289 systemd[1]: kubelet.service: Consumed 1.389s CPU time, 111M memory peak.
Apr 24 00:36:30.580210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 00:36:30.662146 systemd[1]: Reload requested from client PID 2367 ('systemctl') (unit session-9.scope)...
Apr 24 00:36:30.662237 systemd[1]: Reloading...
Apr 24 00:36:31.221404 zram_generator::config[2406]: No configuration found.
Apr 24 00:36:31.637825 systemd[1]: Reloading finished in 975 ms.
Apr 24 00:36:31.769756 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 24 00:36:31.770442 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 24 00:36:31.771679 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:36:31.771801 systemd[1]: kubelet.service: Consumed 555ms CPU time, 98.1M memory peak.
Apr 24 00:36:31.776244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 00:36:32.180347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:36:32.199828 (kubelet)[2458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 00:36:32.526773 kubelet[2458]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 00:36:32.802785 kubelet[2458]: I0424 00:36:32.802404 2458 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 24 00:36:32.802785 kubelet[2458]: I0424 00:36:32.802712 2458 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 00:36:32.802785 kubelet[2458]: I0424 00:36:32.802734 2458 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 24 00:36:32.802785 kubelet[2458]: I0424 00:36:32.802762 2458 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 00:36:32.803494 kubelet[2458]: I0424 00:36:32.803379 2458 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 24 00:36:32.887323 kubelet[2458]: E0424 00:36:32.887182 2458 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 24 00:36:32.928460 kubelet[2458]: I0424 00:36:32.927387 2458 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 00:36:32.961497 kubelet[2458]: I0424 00:36:32.961371 2458 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 24 00:36:33.003807 kubelet[2458]: I0424 00:36:33.003672 2458 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 24 00:36:33.015425 kubelet[2458]: I0424 00:36:33.014525 2458 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 00:36:33.016231 kubelet[2458]: I0424 00:36:33.015384 2458 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 24 00:36:33.017469 kubelet[2458]: I0424 00:36:33.016245 2458 topology_manager.go:143] "Creating topology manager with none policy"
Apr 24 00:36:33.017469 kubelet[2458]: I0424 00:36:33.016262 2458 container_manager_linux.go:308] "Creating device plugin manager"
Apr 24 00:36:33.017469 kubelet[2458]: I0424 00:36:33.016596 2458 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 24 00:36:33.027812 kubelet[2458]: I0424 00:36:33.027593 2458 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 24 00:36:33.029293 kubelet[2458]: I0424 00:36:33.028690 2458 kubelet.go:482] "Attempting to sync node with API server"
Apr 24 00:36:33.029293 kubelet[2458]: I0424 00:36:33.028840 2458 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 00:36:33.029293 kubelet[2458]: I0424 00:36:33.029154 2458 kubelet.go:394] "Adding apiserver pod source"
Apr 24 00:36:33.029807 kubelet[2458]: I0424 00:36:33.029667 2458 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 00:36:33.074352 kubelet[2458]: I0424 00:36:33.072132 2458 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 24 00:36:33.079758 kubelet[2458]: I0424 00:36:33.079581 2458 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 00:36:33.079990 kubelet[2458]: I0424 00:36:33.079839 2458 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 24 00:36:33.080338 kubelet[2458]: W0424 00:36:33.080138 2458 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 24 00:36:33.096099 kubelet[2458]: I0424 00:36:33.096081 2458 server.go:1257] "Started kubelet"
Apr 24 00:36:33.100754 kubelet[2458]: I0424 00:36:33.099182 2458 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 00:36:33.101716 kubelet[2458]: I0424 00:36:33.101147 2458 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 24 00:36:33.102598 kubelet[2458]: I0424 00:36:33.098114 2458 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 00:36:33.113430 kubelet[2458]: I0424 00:36:33.113414 2458 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 00:36:33.115366 kubelet[2458]: I0424 00:36:33.115343 2458 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 00:36:33.117605 kubelet[2458]: I0424 00:36:33.117509 2458 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 24 00:36:33.118362 kubelet[2458]: I0424 00:36:33.118350 2458 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 00:36:33.123562 kubelet[2458]: E0424 00:36:33.122260 2458 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 00:36:33.123562 kubelet[2458]: I0424 00:36:33.122289 2458 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 24 00:36:33.123562 kubelet[2458]: I0424 00:36:33.122832 2458 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 24 00:36:33.124362 kubelet[2458]: I0424 00:36:33.124350 2458 reconciler.go:29] "Reconciler: start to sync state"
Apr 24 00:36:33.124528 kubelet[2458]: E0424 00:36:33.124498 2458 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms"
Apr 24 00:36:33.140718 kubelet[2458]: E0424 00:36:33.140613 2458 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 24 00:36:33.142516 kubelet[2458]: I0424 00:36:33.142493 2458 factory.go:223] Registration of the containerd container factory successfully
Apr 24 00:36:33.142722 kubelet[2458]: I0424 00:36:33.142613 2458 factory.go:223] Registration of the systemd container factory successfully
Apr 24 00:36:33.143188 kubelet[2458]: I0424 00:36:33.142803 2458 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 00:36:33.162457 kubelet[2458]: E0424 00:36:33.156620 2458 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a923e4b08ac780 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-24 00:36:33.095772032 +0000 UTC m=+0.841587949,LastTimestamp:2026-04-24 00:36:33.095772032 +0000 UTC m=+0.841587949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 24 00:36:33.213120 kubelet[2458]: I0424 00:36:33.211814 2458 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 24 00:36:33.222649 kubelet[2458]: E0424 00:36:33.222512 2458 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 00:36:33.234180 kubelet[2458]: I0424 00:36:33.233126 2458 cpu_manager.go:225] "Starting" policy="none"
Apr 24 00:36:33.234180 kubelet[2458]: I0424 00:36:33.233137 2458 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 24 00:36:33.234180 kubelet[2458]: I0424 00:36:33.233153 2458 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 24 00:36:33.238632 kubelet[2458]: I0424 00:36:33.238608 2458 policy_none.go:50] "Start"
Apr 24 00:36:33.240406 kubelet[2458]: I0424 00:36:33.240385 2458 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 24 00:36:33.240596 kubelet[2458]: I0424 00:36:33.240584 2458 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 24 00:36:33.244828 kubelet[2458]: I0424 00:36:33.244707 2458 policy_none.go:44] "Start"
Apr 24 00:36:33.256669 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 24 00:36:33.290312 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 24 00:36:33.305479 kubelet[2458]: I0424 00:36:33.304274 2458 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 24 00:36:33.305479 kubelet[2458]: I0424 00:36:33.304304 2458 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 24 00:36:33.305479 kubelet[2458]: I0424 00:36:33.304404 2458 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 24 00:36:33.305479 kubelet[2458]: E0424 00:36:33.304461 2458 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 00:36:33.305343 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 24 00:36:33.315681 kubelet[2458]: E0424 00:36:33.315287 2458 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 00:36:33.316420 kubelet[2458]: I0424 00:36:33.316333 2458 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 24 00:36:33.316552 kubelet[2458]: I0424 00:36:33.316423 2458 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 00:36:33.318557 kubelet[2458]: I0424 00:36:33.317395 2458 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 24 00:36:33.322764 kubelet[2458]: E0424 00:36:33.322648 2458 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 00:36:33.322764 kubelet[2458]: E0424 00:36:33.322677 2458 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 24 00:36:33.326131 kubelet[2458]: E0424 00:36:33.325699 2458 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms"
Apr 24 00:36:33.427497 kubelet[2458]: I0424 00:36:33.426697 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06a2b53232f567f4e60f1070c7a6500d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06a2b53232f567f4e60f1070c7a6500d\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 00:36:33.431116 kubelet[2458]: I0424 00:36:33.430406 2458 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 24 00:36:33.434523 kubelet[2458]: E0424 00:36:33.433796 2458 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Apr 24 00:36:33.471371 systemd[1]: Created slice kubepods-burstable-pod06a2b53232f567f4e60f1070c7a6500d.slice - libcontainer container kubepods-burstable-pod06a2b53232f567f4e60f1070c7a6500d.slice.
Apr 24 00:36:33.489301 kubelet[2458]: E0424 00:36:33.489249 2458 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 00:36:33.535242 kubelet[2458]: I0424 00:36:33.534752 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 00:36:33.535242 kubelet[2458]: I0424 00:36:33.535114 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 00:36:33.535242 kubelet[2458]: I0424 00:36:33.535144 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 00:36:33.535242 kubelet[2458]: I0424 00:36:33.535199 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 00:36:33.535242 kubelet[2458]: I0424 00:36:33.535224 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 00:36:33.535845 kubelet[2458]: I0424 00:36:33.535243 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 24 00:36:33.535845 kubelet[2458]: I0424 00:36:33.535549 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06a2b53232f567f4e60f1070c7a6500d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06a2b53232f567f4e60f1070c7a6500d\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 00:36:33.535845 kubelet[2458]: I0424 00:36:33.535666 2458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06a2b53232f567f4e60f1070c7a6500d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06a2b53232f567f4e60f1070c7a6500d\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 00:36:33.539776 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice.
Apr 24 00:36:33.548590 kubelet[2458]: E0424 00:36:33.548428 2458 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:36:33.564229 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. Apr 24 00:36:33.569805 kubelet[2458]: E0424 00:36:33.569597 2458 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:36:33.642635 kubelet[2458]: I0424 00:36:33.642299 2458 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 24 00:36:33.644791 kubelet[2458]: E0424 00:36:33.644667 2458 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Apr 24 00:36:33.728719 kubelet[2458]: E0424 00:36:33.727425 2458 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" Apr 24 00:36:33.799250 kubelet[2458]: E0424 00:36:33.798672 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:33.806681 containerd[1570]: time="2026-04-24T00:36:33.806569349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06a2b53232f567f4e60f1070c7a6500d,Namespace:kube-system,Attempt:0,}" Apr 24 00:36:33.870728 kubelet[2458]: E0424 00:36:33.870420 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:33.872306 containerd[1570]: time="2026-04-24T00:36:33.872004147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 24 00:36:33.876971 kubelet[2458]: E0424 00:36:33.876566 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:33.878678 containerd[1570]: time="2026-04-24T00:36:33.878582903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 24 00:36:34.122460 kubelet[2458]: I0424 00:36:34.115293 2458 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 24 00:36:34.145540 kubelet[2458]: E0424 00:36:34.145465 2458 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Apr 24 00:36:34.331307 kubelet[2458]: E0424 00:36:34.330434 2458 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a923e4b08ac780 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-24 00:36:33.095772032 +0000 UTC m=+0.841587949,LastTimestamp:2026-04-24 00:36:33.095772032 +0000 UTC m=+0.841587949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 24 00:36:34.549568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1270118215.mount: Deactivated successfully. Apr 24 00:36:34.553590 kubelet[2458]: E0424 00:36:34.553185 2458 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="1.6s" Apr 24 00:36:34.568777 containerd[1570]: time="2026-04-24T00:36:34.568537380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:36:34.573684 containerd[1570]: time="2026-04-24T00:36:34.573532331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 24 00:36:34.587306 containerd[1570]: time="2026-04-24T00:36:34.587111698Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:36:34.589747 containerd[1570]: time="2026-04-24T00:36:34.589634461Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 24 00:36:34.591507 containerd[1570]: time="2026-04-24T00:36:34.591375996Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:36:34.593217 containerd[1570]: time="2026-04-24T00:36:34.593139454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 24 00:36:34.596678 containerd[1570]: time="2026-04-24T00:36:34.596558779Z" 
level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:36:34.598494 containerd[1570]: time="2026-04-24T00:36:34.598320041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:36:34.603559 containerd[1570]: time="2026-04-24T00:36:34.603262504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 791.668868ms" Apr 24 00:36:34.622279 containerd[1570]: time="2026-04-24T00:36:34.621340614Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 744.892596ms" Apr 24 00:36:34.629661 containerd[1570]: time="2026-04-24T00:36:34.628184557Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 741.539467ms" Apr 24 00:36:34.762116 containerd[1570]: time="2026-04-24T00:36:34.744649597Z" level=info msg="connecting to shim 335a8f8215bb6ad8724ec2a6b57399a66de39b9cbbe35a30cae686dc7b0fd3af" 
address="unix:///run/containerd/s/10e60273798b5b1fcc394378655d372dcd5b9f88d34a76c73e05afbade02110e" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:36:34.763537 containerd[1570]: time="2026-04-24T00:36:34.763415216Z" level=info msg="connecting to shim c57ceee4282bd3e942797bae4868969cef2d4e88770c579d035321d2d7089226" address="unix:///run/containerd/s/97bc58408e81eadf2df47d3b43452fc839e7c18c7a9fd77c41fbc64fb6f7a86e" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:36:34.899317 containerd[1570]: time="2026-04-24T00:36:34.898204748Z" level=info msg="connecting to shim e71f5b7eff86de8ad8b88e29f4efc3c66b9214606076b1dd141dda1605a1531c" address="unix:///run/containerd/s/93303d79d184d6322139230b27f1755e07a674cf3d2bcb8863ccfe055bf9ef18" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:36:34.952523 systemd[1]: Started cri-containerd-335a8f8215bb6ad8724ec2a6b57399a66de39b9cbbe35a30cae686dc7b0fd3af.scope - libcontainer container 335a8f8215bb6ad8724ec2a6b57399a66de39b9cbbe35a30cae686dc7b0fd3af. 
Apr 24 00:36:34.975262 kubelet[2458]: I0424 00:36:34.973846 2458 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 24 00:36:34.984232 kubelet[2458]: E0424 00:36:34.983433 2458 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Apr 24 00:36:35.078786 kubelet[2458]: E0424 00:36:35.077492 2458 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 00:36:35.265653 systemd[1]: Started cri-containerd-c57ceee4282bd3e942797bae4868969cef2d4e88770c579d035321d2d7089226.scope - libcontainer container c57ceee4282bd3e942797bae4868969cef2d4e88770c579d035321d2d7089226. Apr 24 00:36:35.321463 systemd[1]: Started cri-containerd-e71f5b7eff86de8ad8b88e29f4efc3c66b9214606076b1dd141dda1605a1531c.scope - libcontainer container e71f5b7eff86de8ad8b88e29f4efc3c66b9214606076b1dd141dda1605a1531c. 
Apr 24 00:36:35.506333 containerd[1570]: time="2026-04-24T00:36:35.505816174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06a2b53232f567f4e60f1070c7a6500d,Namespace:kube-system,Attempt:0,} returns sandbox id \"335a8f8215bb6ad8724ec2a6b57399a66de39b9cbbe35a30cae686dc7b0fd3af\"" Apr 24 00:36:35.538335 kubelet[2458]: E0424 00:36:35.537493 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:35.792700 containerd[1570]: time="2026-04-24T00:36:35.792063425Z" level=info msg="CreateContainer within sandbox \"335a8f8215bb6ad8724ec2a6b57399a66de39b9cbbe35a30cae686dc7b0fd3af\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 00:36:35.797099 containerd[1570]: time="2026-04-24T00:36:35.797058052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"e71f5b7eff86de8ad8b88e29f4efc3c66b9214606076b1dd141dda1605a1531c\"" Apr 24 00:36:35.804077 kubelet[2458]: E0424 00:36:35.803950 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:35.811133 containerd[1570]: time="2026-04-24T00:36:35.808296174Z" level=info msg="Container 1a280f85f11c0e3cdcda41d767bc3d927bad254155329620bc33a2a0a495cced: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:36:35.817108 containerd[1570]: time="2026-04-24T00:36:35.817068631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c57ceee4282bd3e942797bae4868969cef2d4e88770c579d035321d2d7089226\"" Apr 24 00:36:35.821012 kubelet[2458]: E0424 00:36:35.820749 2458 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:35.825353 containerd[1570]: time="2026-04-24T00:36:35.825146006Z" level=info msg="CreateContainer within sandbox \"e71f5b7eff86de8ad8b88e29f4efc3c66b9214606076b1dd141dda1605a1531c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 00:36:35.825992 containerd[1570]: time="2026-04-24T00:36:35.825767251Z" level=info msg="CreateContainer within sandbox \"335a8f8215bb6ad8724ec2a6b57399a66de39b9cbbe35a30cae686dc7b0fd3af\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a280f85f11c0e3cdcda41d767bc3d927bad254155329620bc33a2a0a495cced\"" Apr 24 00:36:35.827582 containerd[1570]: time="2026-04-24T00:36:35.827482452Z" level=info msg="StartContainer for \"1a280f85f11c0e3cdcda41d767bc3d927bad254155329620bc33a2a0a495cced\"" Apr 24 00:36:35.836101 containerd[1570]: time="2026-04-24T00:36:35.835973947Z" level=info msg="connecting to shim 1a280f85f11c0e3cdcda41d767bc3d927bad254155329620bc33a2a0a495cced" address="unix:///run/containerd/s/10e60273798b5b1fcc394378655d372dcd5b9f88d34a76c73e05afbade02110e" protocol=ttrpc version=3 Apr 24 00:36:35.842789 containerd[1570]: time="2026-04-24T00:36:35.839450619Z" level=info msg="CreateContainer within sandbox \"c57ceee4282bd3e942797bae4868969cef2d4e88770c579d035321d2d7089226\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 00:36:35.857824 containerd[1570]: time="2026-04-24T00:36:35.857773569Z" level=info msg="Container 1bf5dadc87935e4f2907574421ef14bef7bd9f397729877b3d9018f06216ba20: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:36:35.866265 containerd[1570]: time="2026-04-24T00:36:35.865689769Z" level=info msg="Container 14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:36:35.875524 systemd[1]: 
Started cri-containerd-1a280f85f11c0e3cdcda41d767bc3d927bad254155329620bc33a2a0a495cced.scope - libcontainer container 1a280f85f11c0e3cdcda41d767bc3d927bad254155329620bc33a2a0a495cced. Apr 24 00:36:35.880571 containerd[1570]: time="2026-04-24T00:36:35.880286583Z" level=info msg="CreateContainer within sandbox \"e71f5b7eff86de8ad8b88e29f4efc3c66b9214606076b1dd141dda1605a1531c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1bf5dadc87935e4f2907574421ef14bef7bd9f397729877b3d9018f06216ba20\"" Apr 24 00:36:35.882259 containerd[1570]: time="2026-04-24T00:36:35.882241940Z" level=info msg="StartContainer for \"1bf5dadc87935e4f2907574421ef14bef7bd9f397729877b3d9018f06216ba20\"" Apr 24 00:36:35.883506 containerd[1570]: time="2026-04-24T00:36:35.883486133Z" level=info msg="connecting to shim 1bf5dadc87935e4f2907574421ef14bef7bd9f397729877b3d9018f06216ba20" address="unix:///run/containerd/s/93303d79d184d6322139230b27f1755e07a674cf3d2bcb8863ccfe055bf9ef18" protocol=ttrpc version=3 Apr 24 00:36:35.894328 containerd[1570]: time="2026-04-24T00:36:35.894225390Z" level=info msg="CreateContainer within sandbox \"c57ceee4282bd3e942797bae4868969cef2d4e88770c579d035321d2d7089226\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d\"" Apr 24 00:36:35.902366 containerd[1570]: time="2026-04-24T00:36:35.902176411Z" level=info msg="StartContainer for \"14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d\"" Apr 24 00:36:35.907581 containerd[1570]: time="2026-04-24T00:36:35.907457566Z" level=info msg="connecting to shim 14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d" address="unix:///run/containerd/s/97bc58408e81eadf2df47d3b43452fc839e7c18c7a9fd77c41fbc64fb6f7a86e" protocol=ttrpc version=3 Apr 24 00:36:35.927849 systemd[1]: Started cri-containerd-1bf5dadc87935e4f2907574421ef14bef7bd9f397729877b3d9018f06216ba20.scope - libcontainer 
container 1bf5dadc87935e4f2907574421ef14bef7bd9f397729877b3d9018f06216ba20. Apr 24 00:36:35.995607 systemd[1]: Started cri-containerd-14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d.scope - libcontainer container 14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d. Apr 24 00:36:36.128563 containerd[1570]: time="2026-04-24T00:36:36.128460767Z" level=info msg="StartContainer for \"1a280f85f11c0e3cdcda41d767bc3d927bad254155329620bc33a2a0a495cced\" returns successfully" Apr 24 00:36:36.159485 kubelet[2458]: E0424 00:36:36.158695 2458 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="3.2s" Apr 24 00:36:36.184380 containerd[1570]: time="2026-04-24T00:36:36.184282266Z" level=info msg="StartContainer for \"14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d\" returns successfully" Apr 24 00:36:36.189833 containerd[1570]: time="2026-04-24T00:36:36.189741374Z" level=info msg="StartContainer for \"1bf5dadc87935e4f2907574421ef14bef7bd9f397729877b3d9018f06216ba20\" returns successfully" Apr 24 00:36:36.594421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314456465.mount: Deactivated successfully. 
Apr 24 00:36:36.617190 kubelet[2458]: I0424 00:36:36.610796 2458 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 24 00:36:36.617190 kubelet[2458]: E0424 00:36:36.611413 2458 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Apr 24 00:36:36.642113 kubelet[2458]: E0424 00:36:36.642066 2458 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:36:36.643245 kubelet[2458]: E0424 00:36:36.643186 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:36.654982 kubelet[2458]: E0424 00:36:36.648807 2458 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:36:36.654982 kubelet[2458]: E0424 00:36:36.649008 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:36.681388 kubelet[2458]: E0424 00:36:36.681295 2458 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:36:36.682207 kubelet[2458]: E0424 00:36:36.681602 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:37.673950 kubelet[2458]: E0424 00:36:37.668695 2458 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:36:37.673950 
kubelet[2458]: E0424 00:36:37.669220 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:37.673950 kubelet[2458]: E0424 00:36:37.669774 2458 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:36:37.673950 kubelet[2458]: E0424 00:36:37.670065 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:38.677507 kubelet[2458]: E0424 00:36:38.677277 2458 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:36:38.677507 kubelet[2458]: E0424 00:36:38.677476 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:39.850182 kubelet[2458]: I0424 00:36:39.850100 2458 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 24 00:36:41.391018 kubelet[2458]: E0424 00:36:41.390550 2458 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 24 00:36:41.469367 kubelet[2458]: I0424 00:36:41.468793 2458 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 24 00:36:41.525131 kubelet[2458]: I0424 00:36:41.524911 2458 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 00:36:41.592919 kubelet[2458]: E0424 00:36:41.592635 2458 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-localhost" Apr 24 00:36:41.592919 kubelet[2458]: I0424 00:36:41.592661 2458 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 00:36:41.597821 kubelet[2458]: E0424 00:36:41.597627 2458 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 24 00:36:41.598598 kubelet[2458]: I0424 00:36:41.598543 2458 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 24 00:36:41.603329 kubelet[2458]: E0424 00:36:41.603249 2458 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 24 00:36:42.086554 kubelet[2458]: I0424 00:36:42.086408 2458 apiserver.go:52] "Watching apiserver" Apr 24 00:36:42.123851 kubelet[2458]: I0424 00:36:42.123758 2458 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 00:36:44.116781 kubelet[2458]: I0424 00:36:44.115177 2458 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 00:36:44.177949 kubelet[2458]: E0424 00:36:44.177710 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:44.821536 kubelet[2458]: E0424 00:36:44.821422 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:45.793032 kubelet[2458]: I0424 00:36:45.791974 2458 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Apr 24 00:36:45.823977 kubelet[2458]: E0424 00:36:45.823332 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:46.902967 kubelet[2458]: E0424 00:36:46.902482 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:46.920314 systemd[1]: Reload requested from client PID 2755 ('systemctl') (unit session-9.scope)... Apr 24 00:36:46.920336 systemd[1]: Reloading... Apr 24 00:36:47.390208 zram_generator::config[2801]: No configuration found. Apr 24 00:36:47.874643 kubelet[2458]: I0424 00:36:47.874303 2458 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 00:36:47.979004 kubelet[2458]: E0424 00:36:47.978916 2458 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:48.046349 kubelet[2458]: I0424 00:36:48.046268 2458 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.046224216 podStartE2EDuration="3.046224216s" podCreationTimestamp="2026-04-24 00:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:36:48.019557688 +0000 UTC m=+15.765373630" watchObservedRunningTime="2026-04-24 00:36:48.046224216 +0000 UTC m=+15.792040138" Apr 24 00:36:48.077282 systemd[1]: Reloading finished in 1156 ms. Apr 24 00:36:48.176542 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:36:48.187237 systemd[1]: kubelet.service: Deactivated successfully. 
Apr 24 00:36:48.187547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:36:48.187618 systemd[1]: kubelet.service: Consumed 5.178s CPU time, 128.6M memory peak. Apr 24 00:36:48.200725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:36:48.505394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:36:48.526591 (kubelet)[2842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 00:36:48.770238 kubelet[2842]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 00:36:48.805125 kubelet[2842]: I0424 00:36:48.804459 2842 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 24 00:36:48.807465 kubelet[2842]: I0424 00:36:48.806682 2842 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 00:36:48.807465 kubelet[2842]: I0424 00:36:48.806739 2842 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 00:36:48.807465 kubelet[2842]: I0424 00:36:48.806743 2842 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 24 00:36:48.807465 kubelet[2842]: I0424 00:36:48.807330 2842 server.go:951] "Client rotation is on, will bootstrap in background" Apr 24 00:36:48.809280 kubelet[2842]: I0424 00:36:48.808951 2842 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 00:36:48.813986 kubelet[2842]: I0424 00:36:48.813117 2842 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 00:36:48.892693 kubelet[2842]: I0424 00:36:48.892561 2842 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 24 00:36:48.907279 kubelet[2842]: I0424 00:36:48.906669 2842 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 24 00:36:48.908465 kubelet[2842]: I0424 00:36:48.908325 2842 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 00:36:48.909687 kubelet[2842]: I0424 00:36:48.908389 2842 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 00:36:48.909687 kubelet[2842]: I0424 00:36:48.908553 2842 topology_manager.go:143] "Creating topology manager with none policy" Apr 24 00:36:48.909687 kubelet[2842]: I0424 00:36:48.908559 2842 container_manager_linux.go:308] "Creating device plugin manager" Apr 24 00:36:48.909687 kubelet[2842]: I0424 00:36:48.908585 2842 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 00:36:48.909687 kubelet[2842]: I0424 00:36:48.909693 2842 state_mem.go:41] 
"Initialized" logger="CPUManager state memory" Apr 24 00:36:48.912798 kubelet[2842]: I0424 00:36:48.912641 2842 kubelet.go:482] "Attempting to sync node with API server" Apr 24 00:36:48.912798 kubelet[2842]: I0424 00:36:48.912660 2842 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 00:36:48.912798 kubelet[2842]: I0424 00:36:48.912726 2842 kubelet.go:394] "Adding apiserver pod source" Apr 24 00:36:48.912798 kubelet[2842]: I0424 00:36:48.912735 2842 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 00:36:48.920013 kubelet[2842]: I0424 00:36:48.919816 2842 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 24 00:36:48.928146 kubelet[2842]: I0424 00:36:48.927048 2842 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 00:36:48.928146 kubelet[2842]: I0424 00:36:48.927138 2842 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 00:36:48.960316 kubelet[2842]: I0424 00:36:48.959322 2842 server.go:1257] "Started kubelet" Apr 24 00:36:48.966716 kubelet[2842]: I0424 00:36:48.962799 2842 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 00:36:48.966716 kubelet[2842]: I0424 00:36:48.966509 2842 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 00:36:48.967468 kubelet[2842]: I0424 00:36:48.967450 2842 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 00:36:48.971958 kubelet[2842]: I0424 00:36:48.969323 2842 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 00:36:48.971958 kubelet[2842]: I0424 00:36:48.969442 2842 fs_resource_analyzer.go:69] "Starting FS 
ResourceAnalyzer" Apr 24 00:36:48.973241 kubelet[2842]: I0424 00:36:48.972255 2842 server.go:317] "Adding debug handlers to kubelet server" Apr 24 00:36:48.978829 kubelet[2842]: I0424 00:36:48.977785 2842 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 24 00:36:48.981039 kubelet[2842]: I0424 00:36:48.979917 2842 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 00:36:48.987400 kubelet[2842]: E0424 00:36:48.987193 2842 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 00:36:48.987400 kubelet[2842]: I0424 00:36:48.987480 2842 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 00:36:48.988225 kubelet[2842]: I0424 00:36:48.987618 2842 reconciler.go:29] "Reconciler: start to sync state" Apr 24 00:36:48.998163 kubelet[2842]: I0424 00:36:48.998028 2842 factory.go:223] Registration of the systemd container factory successfully Apr 24 00:36:48.999575 kubelet[2842]: I0424 00:36:48.999429 2842 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 00:36:49.027773 kubelet[2842]: I0424 00:36:49.024343 2842 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 24 00:36:49.088362 kubelet[2842]: I0424 00:36:49.088276 2842 factory.go:223] Registration of the containerd container factory successfully Apr 24 00:36:49.109222 kubelet[2842]: E0424 00:36:49.107113 2842 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 00:36:49.122459 kubelet[2842]: I0424 00:36:49.122386 2842 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 24 00:36:49.122459 kubelet[2842]: I0424 00:36:49.122464 2842 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 24 00:36:49.122789 kubelet[2842]: I0424 00:36:49.122559 2842 kubelet.go:2501] "Starting kubelet main sync loop" Apr 24 00:36:49.122789 kubelet[2842]: E0424 00:36:49.122627 2842 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 00:36:49.223536 kubelet[2842]: E0424 00:36:49.223442 2842 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 24 00:36:49.427596 kubelet[2842]: E0424 00:36:49.427038 2842 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 24 00:36:49.431292 kubelet[2842]: I0424 00:36:49.431205 2842 cpu_manager.go:225] "Starting" policy="none" Apr 24 00:36:49.431292 kubelet[2842]: I0424 00:36:49.431253 2842 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 24 00:36:49.431292 kubelet[2842]: I0424 00:36:49.431286 2842 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 24 00:36:49.431579 kubelet[2842]: I0424 00:36:49.431568 2842 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 24 00:36:49.431615 kubelet[2842]: I0424 00:36:49.431579 2842 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 24 00:36:49.431615 kubelet[2842]: I0424 00:36:49.431593 2842 policy_none.go:50] "Start" Apr 24 00:36:49.431615 kubelet[2842]: I0424 00:36:49.431600 2842 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 00:36:49.431615 kubelet[2842]: I0424 00:36:49.431607 2842 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 
00:36:49.431786 kubelet[2842]: I0424 00:36:49.431737 2842 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 24 00:36:49.431786 kubelet[2842]: I0424 00:36:49.431776 2842 policy_none.go:44] "Start" Apr 24 00:36:49.462009 kubelet[2842]: E0424 00:36:49.461954 2842 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 00:36:49.462310 kubelet[2842]: I0424 00:36:49.462260 2842 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 24 00:36:49.462378 kubelet[2842]: I0424 00:36:49.462308 2842 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 00:36:49.464330 kubelet[2842]: I0424 00:36:49.463024 2842 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 24 00:36:49.467769 kubelet[2842]: E0424 00:36:49.467683 2842 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 24 00:36:49.807670 kubelet[2842]: I0424 00:36:49.807540 2842 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 24 00:36:49.863776 kubelet[2842]: I0424 00:36:49.863656 2842 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 00:36:49.869502 kubelet[2842]: I0424 00:36:49.869421 2842 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 24 00:36:49.871199 kubelet[2842]: I0424 00:36:49.869850 2842 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 00:36:49.904023 kubelet[2842]: E0424 00:36:49.903771 2842 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 24 00:36:49.904454 kubelet[2842]: E0424 00:36:49.904416 2842 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 24 00:36:49.915668 kubelet[2842]: I0424 00:36:49.914718 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 24 00:36:49.915668 kubelet[2842]: I0424 00:36:49.914802 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:36:49.916676 kubelet[2842]: I0424 00:36:49.915005 2842 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:36:49.916676 kubelet[2842]: I0424 00:36:49.915953 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:36:49.916676 kubelet[2842]: I0424 00:36:49.916055 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:36:49.919539 kubelet[2842]: I0424 00:36:49.919479 2842 apiserver.go:52] "Watching apiserver" Apr 24 00:36:49.919808 kubelet[2842]: I0424 00:36:49.919761 2842 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 24 00:36:49.919836 kubelet[2842]: I0424 00:36:49.919828 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06a2b53232f567f4e60f1070c7a6500d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06a2b53232f567f4e60f1070c7a6500d\") " pod="kube-system/kube-apiserver-localhost" Apr 24 00:36:49.919953 kubelet[2842]: I0424 00:36:49.919846 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/06a2b53232f567f4e60f1070c7a6500d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06a2b53232f567f4e60f1070c7a6500d\") " pod="kube-system/kube-apiserver-localhost" Apr 24 00:36:49.919981 kubelet[2842]: I0424 00:36:49.919976 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06a2b53232f567f4e60f1070c7a6500d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06a2b53232f567f4e60f1070c7a6500d\") " pod="kube-system/kube-apiserver-localhost" Apr 24 00:36:49.922558 kubelet[2842]: I0424 00:36:49.922414 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:36:49.924583 kubelet[2842]: I0424 00:36:49.924467 2842 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 24 00:36:49.938791 kubelet[2842]: E0424 00:36:49.938646 2842 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 24 00:36:49.992317 kubelet[2842]: I0424 00:36:49.992105 2842 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 00:36:50.206326 kubelet[2842]: E0424 00:36:50.205822 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:50.206326 kubelet[2842]: E0424 00:36:50.206158 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 
00:36:50.275686 kubelet[2842]: E0424 00:36:50.274119 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:50.309805 kubelet[2842]: E0424 00:36:50.309645 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:50.334639 kubelet[2842]: E0424 00:36:50.334544 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:50.421672 kubelet[2842]: I0424 00:36:50.421522 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.421470677 podStartE2EDuration="3.421470677s" podCreationTimestamp="2026-04-24 00:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:36:50.42118864 +0000 UTC m=+1.808659102" watchObservedRunningTime="2026-04-24 00:36:50.421470677 +0000 UTC m=+1.808941144" Apr 24 00:36:51.348474 kubelet[2842]: E0424 00:36:51.348197 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:51.393213 kubelet[2842]: E0424 00:36:51.392471 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:51.827747 kubelet[2842]: I0424 00:36:51.827546 2842 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 00:36:51.886221 containerd[1570]: time="2026-04-24T00:36:51.886120799Z" level=info 
msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 00:36:51.892610 kubelet[2842]: I0424 00:36:51.892493 2842 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 00:36:52.411014 kubelet[2842]: E0424 00:36:52.409923 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:53.520592 systemd[1]: Created slice kubepods-besteffort-pod2d0a091e_a52c_4399_935f_66ae0160a5c3.slice - libcontainer container kubepods-besteffort-pod2d0a091e_a52c_4399_935f_66ae0160a5c3.slice. Apr 24 00:36:53.531196 kubelet[2842]: I0424 00:36:53.530973 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d0a091e-a52c-4399-935f-66ae0160a5c3-lib-modules\") pod \"kube-proxy-mc5t8\" (UID: \"2d0a091e-a52c-4399-935f-66ae0160a5c3\") " pod="kube-system/kube-proxy-mc5t8" Apr 24 00:36:53.531196 kubelet[2842]: I0424 00:36:53.531150 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87tww\" (UniqueName: \"kubernetes.io/projected/2d0a091e-a52c-4399-935f-66ae0160a5c3-kube-api-access-87tww\") pod \"kube-proxy-mc5t8\" (UID: \"2d0a091e-a52c-4399-935f-66ae0160a5c3\") " pod="kube-system/kube-proxy-mc5t8" Apr 24 00:36:53.531698 kubelet[2842]: I0424 00:36:53.531167 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d0a091e-a52c-4399-935f-66ae0160a5c3-kube-proxy\") pod \"kube-proxy-mc5t8\" (UID: \"2d0a091e-a52c-4399-935f-66ae0160a5c3\") " pod="kube-system/kube-proxy-mc5t8" Apr 24 00:36:53.531698 kubelet[2842]: I0424 00:36:53.531628 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d0a091e-a52c-4399-935f-66ae0160a5c3-xtables-lock\") pod \"kube-proxy-mc5t8\" (UID: \"2d0a091e-a52c-4399-935f-66ae0160a5c3\") " pod="kube-system/kube-proxy-mc5t8" Apr 24 00:36:53.843638 kubelet[2842]: E0424 00:36:53.843484 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:53.846001 containerd[1570]: time="2026-04-24T00:36:53.845752728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mc5t8,Uid:2d0a091e-a52c-4399-935f-66ae0160a5c3,Namespace:kube-system,Attempt:0,}" Apr 24 00:36:53.990644 containerd[1570]: time="2026-04-24T00:36:53.990463261Z" level=info msg="connecting to shim 257a5b7d6ca0d79e851cb767825a5cc2f6ebc87187a88c271b8129d966d28da4" address="unix:///run/containerd/s/ee6a3be5965a84b82001f29bfcbe88d94355a63109d8867cb2d81a0e02e43495" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:36:54.445396 systemd[1]: Started cri-containerd-257a5b7d6ca0d79e851cb767825a5cc2f6ebc87187a88c271b8129d966d28da4.scope - libcontainer container 257a5b7d6ca0d79e851cb767825a5cc2f6ebc87187a88c271b8129d966d28da4. 
Apr 24 00:36:54.653575 containerd[1570]: time="2026-04-24T00:36:54.653325259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mc5t8,Uid:2d0a091e-a52c-4399-935f-66ae0160a5c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"257a5b7d6ca0d79e851cb767825a5cc2f6ebc87187a88c271b8129d966d28da4\"" Apr 24 00:36:54.676037 kubelet[2842]: E0424 00:36:54.675739 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:54.709138 containerd[1570]: time="2026-04-24T00:36:54.707525891Z" level=info msg="CreateContainer within sandbox \"257a5b7d6ca0d79e851cb767825a5cc2f6ebc87187a88c271b8129d966d28da4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 00:36:54.740010 containerd[1570]: time="2026-04-24T00:36:54.739844191Z" level=info msg="Container d2cf51532e15854fdc427d18a439a3f6fcb16e8a542a149987bf7c8b346ca7da: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:36:54.762002 containerd[1570]: time="2026-04-24T00:36:54.761414011Z" level=info msg="CreateContainer within sandbox \"257a5b7d6ca0d79e851cb767825a5cc2f6ebc87187a88c271b8129d966d28da4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d2cf51532e15854fdc427d18a439a3f6fcb16e8a542a149987bf7c8b346ca7da\"" Apr 24 00:36:54.763397 containerd[1570]: time="2026-04-24T00:36:54.763305795Z" level=info msg="StartContainer for \"d2cf51532e15854fdc427d18a439a3f6fcb16e8a542a149987bf7c8b346ca7da\"" Apr 24 00:36:54.768049 containerd[1570]: time="2026-04-24T00:36:54.767793122Z" level=info msg="connecting to shim d2cf51532e15854fdc427d18a439a3f6fcb16e8a542a149987bf7c8b346ca7da" address="unix:///run/containerd/s/ee6a3be5965a84b82001f29bfcbe88d94355a63109d8867cb2d81a0e02e43495" protocol=ttrpc version=3 Apr 24 00:36:54.914631 systemd[1]: Started cri-containerd-d2cf51532e15854fdc427d18a439a3f6fcb16e8a542a149987bf7c8b346ca7da.scope - libcontainer 
container d2cf51532e15854fdc427d18a439a3f6fcb16e8a542a149987bf7c8b346ca7da. Apr 24 00:36:55.041389 systemd[1]: Created slice kubepods-besteffort-pod651537bc_1228_465c_9bb2_38fb5d6f1ef1.slice - libcontainer container kubepods-besteffort-pod651537bc_1228_465c_9bb2_38fb5d6f1ef1.slice. Apr 24 00:36:55.189198 containerd[1570]: time="2026-04-24T00:36:55.189152584Z" level=info msg="StartContainer for \"d2cf51532e15854fdc427d18a439a3f6fcb16e8a542a149987bf7c8b346ca7da\" returns successfully" Apr 24 00:36:55.201195 kubelet[2842]: I0424 00:36:55.200971 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/651537bc-1228-465c-9bb2-38fb5d6f1ef1-var-lib-calico\") pod \"tigera-operator-687949b757-mcfdf\" (UID: \"651537bc-1228-465c-9bb2-38fb5d6f1ef1\") " pod="tigera-operator/tigera-operator-687949b757-mcfdf" Apr 24 00:36:55.202151 kubelet[2842]: I0424 00:36:55.201998 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjhbg\" (UniqueName: \"kubernetes.io/projected/651537bc-1228-465c-9bb2-38fb5d6f1ef1-kube-api-access-bjhbg\") pod \"tigera-operator-687949b757-mcfdf\" (UID: \"651537bc-1228-465c-9bb2-38fb5d6f1ef1\") " pod="tigera-operator/tigera-operator-687949b757-mcfdf" Apr 24 00:36:55.356301 containerd[1570]: time="2026-04-24T00:36:55.355613391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-687949b757-mcfdf,Uid:651537bc-1228-465c-9bb2-38fb5d6f1ef1,Namespace:tigera-operator,Attempt:0,}" Apr 24 00:36:55.422598 containerd[1570]: time="2026-04-24T00:36:55.422513177Z" level=info msg="connecting to shim 29c9b8e1e5ae9689ab8fdb416a7a8bf111eb06fcf801e46ef65e57a22323c201" address="unix:///run/containerd/s/95e659de8a4b4f412a261fa2fba5d6d18d60080905262246c92bd6465c1c712d" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:36:55.493808 kubelet[2842]: E0424 00:36:55.493627 2842 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:36:55.627542 systemd[1]: Started cri-containerd-29c9b8e1e5ae9689ab8fdb416a7a8bf111eb06fcf801e46ef65e57a22323c201.scope - libcontainer container 29c9b8e1e5ae9689ab8fdb416a7a8bf111eb06fcf801e46ef65e57a22323c201. Apr 24 00:36:56.412244 containerd[1570]: time="2026-04-24T00:36:56.412049048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-687949b757-mcfdf,Uid:651537bc-1228-465c-9bb2-38fb5d6f1ef1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"29c9b8e1e5ae9689ab8fdb416a7a8bf111eb06fcf801e46ef65e57a22323c201\"" Apr 24 00:36:56.421948 containerd[1570]: time="2026-04-24T00:36:56.421588442Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\"" Apr 24 00:36:58.609487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845381007.mount: Deactivated successfully. Apr 24 00:36:59.291226 kubelet[2842]: I0424 00:36:59.291158 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-mc5t8" podStartSLOduration=6.291068401 podStartE2EDuration="6.291068401s" podCreationTimestamp="2026-04-24 00:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:36:55.574706556 +0000 UTC m=+6.962177040" watchObservedRunningTime="2026-04-24 00:36:59.291068401 +0000 UTC m=+10.678538871" Apr 24 00:37:03.998014 containerd[1570]: time="2026-04-24T00:37:03.997827183Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:37:03.999735 containerd[1570]: time="2026-04-24T00:37:03.999685585Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.8: active requests=0, bytes read=41007543" Apr 24 00:37:04.004986 containerd[1570]: 
time="2026-04-24T00:37:04.004721181Z" level=info msg="ImageCreate event name:\"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:37:04.014758 containerd[1570]: time="2026-04-24T00:37:04.014653588Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:37:04.015531 containerd[1570]: time="2026-04-24T00:37:04.015359905Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.8\" with image id \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\", repo tag \"quay.io/tigera/operator:v1.40.8\", repo digest \"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\", size \"41003538\" in 7.593668097s" Apr 24 00:37:04.015531 containerd[1570]: time="2026-04-24T00:37:04.015409261Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\" returns image reference \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\"" Apr 24 00:37:04.113646 containerd[1570]: time="2026-04-24T00:37:04.113519229Z" level=info msg="CreateContainer within sandbox \"29c9b8e1e5ae9689ab8fdb416a7a8bf111eb06fcf801e46ef65e57a22323c201\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 24 00:37:04.160354 containerd[1570]: time="2026-04-24T00:37:04.160152054Z" level=info msg="Container afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:37:04.191082 containerd[1570]: time="2026-04-24T00:37:04.190845352Z" level=info msg="CreateContainer within sandbox \"29c9b8e1e5ae9689ab8fdb416a7a8bf111eb06fcf801e46ef65e57a22323c201\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd\"" Apr 24 
00:37:04.194169 containerd[1570]: time="2026-04-24T00:37:04.194086948Z" level=info msg="StartContainer for \"afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd\"" Apr 24 00:37:04.199001 containerd[1570]: time="2026-04-24T00:37:04.198499395Z" level=info msg="connecting to shim afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd" address="unix:///run/containerd/s/95e659de8a4b4f412a261fa2fba5d6d18d60080905262246c92bd6465c1c712d" protocol=ttrpc version=3 Apr 24 00:37:04.274301 systemd[1]: Started cri-containerd-afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd.scope - libcontainer container afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd. Apr 24 00:37:04.718994 containerd[1570]: time="2026-04-24T00:37:04.718796989Z" level=info msg="StartContainer for \"afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd\" returns successfully" Apr 24 00:37:16.398508 sudo[1796]: pam_unix(sudo:session): session closed for user root Apr 24 00:37:16.401082 sshd[1795]: Connection closed by 10.0.0.1 port 44096 Apr 24 00:37:16.404955 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Apr 24 00:37:16.422480 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:44096.service: Deactivated successfully. Apr 24 00:37:16.430284 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 00:37:16.431285 systemd[1]: session-9.scope: Consumed 24.505s CPU time, 234.8M memory peak. Apr 24 00:37:16.435623 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit. Apr 24 00:37:16.439501 systemd-logind[1558]: Removed session 9. 
Apr 24 00:37:21.495820 kubelet[2842]: I0424 00:37:21.495714 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-687949b757-mcfdf" podStartSLOduration=19.896567122 podStartE2EDuration="27.4956245s" podCreationTimestamp="2026-04-24 00:36:54 +0000 UTC" firstStartedPulling="2026-04-24 00:36:56.419734256 +0000 UTC m=+7.807204713" lastFinishedPulling="2026-04-24 00:37:04.018791626 +0000 UTC m=+15.406262091" observedRunningTime="2026-04-24 00:37:05.064806117 +0000 UTC m=+16.452276597" watchObservedRunningTime="2026-04-24 00:37:21.4956245 +0000 UTC m=+32.883095058" Apr 24 00:37:21.558797 systemd[1]: Created slice kubepods-besteffort-pod9d093a71_5338_4415_aa14_78b6d7df8fbf.slice - libcontainer container kubepods-besteffort-pod9d093a71_5338_4415_aa14_78b6d7df8fbf.slice. Apr 24 00:37:21.680839 kubelet[2842]: I0424 00:37:21.680683 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbz8\" (UniqueName: \"kubernetes.io/projected/9d093a71-5338-4415-aa14-78b6d7df8fbf-kube-api-access-bnbz8\") pod \"calico-typha-8db759cbc-2dt49\" (UID: \"9d093a71-5338-4415-aa14-78b6d7df8fbf\") " pod="calico-system/calico-typha-8db759cbc-2dt49" Apr 24 00:37:21.683546 kubelet[2842]: I0424 00:37:21.683485 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d093a71-5338-4415-aa14-78b6d7df8fbf-tigera-ca-bundle\") pod \"calico-typha-8db759cbc-2dt49\" (UID: \"9d093a71-5338-4415-aa14-78b6d7df8fbf\") " pod="calico-system/calico-typha-8db759cbc-2dt49" Apr 24 00:37:21.683755 kubelet[2842]: I0424 00:37:21.683743 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9d093a71-5338-4415-aa14-78b6d7df8fbf-typha-certs\") pod \"calico-typha-8db759cbc-2dt49\" (UID: 
\"9d093a71-5338-4415-aa14-78b6d7df8fbf\") " pod="calico-system/calico-typha-8db759cbc-2dt49" Apr 24 00:37:22.211178 kubelet[2842]: E0424 00:37:22.209503 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:37:22.263385 containerd[1570]: time="2026-04-24T00:37:22.228303854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8db759cbc-2dt49,Uid:9d093a71-5338-4415-aa14-78b6d7df8fbf,Namespace:calico-system,Attempt:0,}" Apr 24 00:37:22.596327 containerd[1570]: time="2026-04-24T00:37:22.593749407Z" level=info msg="connecting to shim 07b48ba5c45cba73c5e85d185e53b3435d44008796c7c8ec3f29ccda0aee5f89" address="unix:///run/containerd/s/8965185261d0c7228afda094bdd8ae198a84a3c5a070ebbd40802b272fd670b0" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:37:23.028196 systemd[1]: Started cri-containerd-07b48ba5c45cba73c5e85d185e53b3435d44008796c7c8ec3f29ccda0aee5f89.scope - libcontainer container 07b48ba5c45cba73c5e85d185e53b3435d44008796c7c8ec3f29ccda0aee5f89. Apr 24 00:37:23.127233 systemd[1]: Created slice kubepods-besteffort-podd1b23bf3_e49e_4c90_a9d7_196b32f35107.slice - libcontainer container kubepods-besteffort-podd1b23bf3_e49e_4c90_a9d7_196b32f35107.slice. 
Apr 24 00:37:23.235209 kubelet[2842]: I0424 00:37:23.235015 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1b23bf3-e49e-4c90-a9d7-196b32f35107-tigera-ca-bundle\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.235209 kubelet[2842]: I0424 00:37:23.235097 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1b23bf3-e49e-4c90-a9d7-196b32f35107-xtables-lock\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.235651 kubelet[2842]: I0424 00:37:23.235297 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d1b23bf3-e49e-4c90-a9d7-196b32f35107-node-certs\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.235651 kubelet[2842]: I0424 00:37:23.235313 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d1b23bf3-e49e-4c90-a9d7-196b32f35107-bpffs\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.235651 kubelet[2842]: I0424 00:37:23.235326 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1b23bf3-e49e-4c90-a9d7-196b32f35107-lib-modules\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.235651 kubelet[2842]: I0424 00:37:23.235384 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/d1b23bf3-e49e-4c90-a9d7-196b32f35107-nodeproc\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.235651 kubelet[2842]: I0424 00:37:23.235420 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgr9f\" (UniqueName: \"kubernetes.io/projected/d1b23bf3-e49e-4c90-a9d7-196b32f35107-kube-api-access-vgr9f\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.235758 kubelet[2842]: I0424 00:37:23.235433 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d1b23bf3-e49e-4c90-a9d7-196b32f35107-cni-net-dir\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.235758 kubelet[2842]: I0424 00:37:23.235446 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d1b23bf3-e49e-4c90-a9d7-196b32f35107-policysync\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.235758 kubelet[2842]: I0424 00:37:23.235475 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d1b23bf3-e49e-4c90-a9d7-196b32f35107-cni-log-dir\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.235758 kubelet[2842]: I0424 00:37:23.235485 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d1b23bf3-e49e-4c90-a9d7-196b32f35107-var-run-calico\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.235758 kubelet[2842]: I0424 00:37:23.235504 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d1b23bf3-e49e-4c90-a9d7-196b32f35107-cni-bin-dir\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.236771 kubelet[2842]: I0424 00:37:23.235540 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d1b23bf3-e49e-4c90-a9d7-196b32f35107-flexvol-driver-host\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.236771 kubelet[2842]: I0424 00:37:23.235591 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d1b23bf3-e49e-4c90-a9d7-196b32f35107-sys-fs\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.239030 kubelet[2842]: I0424 00:37:23.235603 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1b23bf3-e49e-4c90-a9d7-196b32f35107-var-lib-calico\") pod \"calico-node-249n5\" (UID: \"d1b23bf3-e49e-4c90-a9d7-196b32f35107\") " pod="calico-system/calico-node-249n5"
Apr 24 00:37:23.410488 kubelet[2842]: E0424 00:37:23.410363 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.410988 kubelet[2842]: W0424 00:37:23.410806 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.411113 kubelet[2842]: E0424 00:37:23.411076 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.411929 kubelet[2842]: E0424 00:37:23.411778 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.411929 kubelet[2842]: W0424 00:37:23.411789 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.411929 kubelet[2842]: E0424 00:37:23.411798 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.412631 kubelet[2842]: E0424 00:37:23.412620 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.415397 kubelet[2842]: W0424 00:37:23.414575 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.415674 kubelet[2842]: E0424 00:37:23.415652 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.416549 kubelet[2842]: E0424 00:37:23.416539 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.416608 kubelet[2842]: W0424 00:37:23.416600 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.416711 kubelet[2842]: E0424 00:37:23.416703 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.417352 kubelet[2842]: E0424 00:37:23.417342 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.417408 kubelet[2842]: W0424 00:37:23.417401 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.417439 kubelet[2842]: E0424 00:37:23.417434 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.422665 kubelet[2842]: E0424 00:37:23.422270 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.422665 kubelet[2842]: W0424 00:37:23.422354 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.422665 kubelet[2842]: E0424 00:37:23.422410 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.433704 kubelet[2842]: E0424 00:37:23.433682 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.434004 kubelet[2842]: W0424 00:37:23.433992 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.434953 kubelet[2842]: E0424 00:37:23.434287 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.439363 kubelet[2842]: E0424 00:37:23.437038 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.439363 kubelet[2842]: W0424 00:37:23.437089 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.439363 kubelet[2842]: E0424 00:37:23.437107 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.439363 kubelet[2842]: E0424 00:37:23.437268 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.439363 kubelet[2842]: W0424 00:37:23.437274 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.439363 kubelet[2842]: E0424 00:37:23.437281 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.439363 kubelet[2842]: E0424 00:37:23.438675 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.443269 kubelet[2842]: W0424 00:37:23.438842 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.443671 kubelet[2842]: E0424 00:37:23.443558 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.445941 kubelet[2842]: E0424 00:37:23.445437 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.445941 kubelet[2842]: W0424 00:37:23.445535 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.445941 kubelet[2842]: E0424 00:37:23.445546 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.449562 kubelet[2842]: E0424 00:37:23.446086 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.453297 kubelet[2842]: W0424 00:37:23.448708 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.453362 kubelet[2842]: E0424 00:37:23.453305 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.454067 kubelet[2842]: E0424 00:37:23.453823 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.454067 kubelet[2842]: W0424 00:37:23.453969 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.454067 kubelet[2842]: E0424 00:37:23.453980 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.454387 kubelet[2842]: E0424 00:37:23.454321 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.454387 kubelet[2842]: W0424 00:37:23.454370 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.454387 kubelet[2842]: E0424 00:37:23.454381 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.455535 kubelet[2842]: E0424 00:37:23.455401 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.455535 kubelet[2842]: W0424 00:37:23.455562 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.456002 kubelet[2842]: E0424 00:37:23.455754 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.463642 kubelet[2842]: E0424 00:37:23.463240 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.463642 kubelet[2842]: W0424 00:37:23.463375 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.467963 kubelet[2842]: E0424 00:37:23.467267 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.469480 kubelet[2842]: E0424 00:37:23.469403 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.469480 kubelet[2842]: W0424 00:37:23.469462 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.469480 kubelet[2842]: E0424 00:37:23.469479 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.471739 kubelet[2842]: E0424 00:37:23.471481 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.473928 kubelet[2842]: W0424 00:37:23.473593 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.476321 kubelet[2842]: E0424 00:37:23.475695 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.481485 kubelet[2842]: E0424 00:37:23.481373 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.481624 kubelet[2842]: W0424 00:37:23.481504 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.481624 kubelet[2842]: E0424 00:37:23.481565 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.490242 kubelet[2842]: E0424 00:37:23.490074 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.490481 kubelet[2842]: W0424 00:37:23.490244 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.490481 kubelet[2842]: E0424 00:37:23.490304 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.492453 kubelet[2842]: E0424 00:37:23.492290 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.492706 kubelet[2842]: W0424 00:37:23.492455 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.492706 kubelet[2842]: E0424 00:37:23.492566 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.496260 kubelet[2842]: E0424 00:37:23.496194 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.496260 kubelet[2842]: W0424 00:37:23.496253 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.496365 kubelet[2842]: E0424 00:37:23.496269 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.496988 kubelet[2842]: E0424 00:37:23.496494 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.496988 kubelet[2842]: W0424 00:37:23.496557 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.496988 kubelet[2842]: E0424 00:37:23.496570 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.501527 kubelet[2842]: E0424 00:37:23.501413 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.501806 kubelet[2842]: W0424 00:37:23.501537 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.501806 kubelet[2842]: E0424 00:37:23.501699 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.503732 kubelet[2842]: E0424 00:37:23.503598 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.503732 kubelet[2842]: W0424 00:37:23.503660 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.503732 kubelet[2842]: E0424 00:37:23.503675 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.504732 kubelet[2842]: E0424 00:37:23.504529 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.504732 kubelet[2842]: W0424 00:37:23.504614 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.504732 kubelet[2842]: E0424 00:37:23.504631 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.506039 kubelet[2842]: E0424 00:37:23.505841 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.506039 kubelet[2842]: W0424 00:37:23.505994 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.506039 kubelet[2842]: E0424 00:37:23.506005 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.506683 kubelet[2842]: E0424 00:37:23.506584 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.506683 kubelet[2842]: W0424 00:37:23.506594 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.506683 kubelet[2842]: E0424 00:37:23.506609 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.508012 kubelet[2842]: E0424 00:37:23.507848 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.508012 kubelet[2842]: W0424 00:37:23.507989 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.508012 kubelet[2842]: E0424 00:37:23.507998 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.509050 kubelet[2842]: E0424 00:37:23.508812 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.509050 kubelet[2842]: W0424 00:37:23.508822 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.509050 kubelet[2842]: E0424 00:37:23.508831 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.514341 kubelet[2842]: E0424 00:37:23.513704 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.519006 kubelet[2842]: W0424 00:37:23.516600 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.519006 kubelet[2842]: E0424 00:37:23.518370 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.573095 kubelet[2842]: E0424 00:37:23.572678 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.575711 kubelet[2842]: W0424 00:37:23.573664 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.576086 kubelet[2842]: E0424 00:37:23.576023 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.585273 kubelet[2842]: E0424 00:37:23.585203 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.585273 kubelet[2842]: W0424 00:37:23.585260 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.585273 kubelet[2842]: E0424 00:37:23.585388 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.586338 kubelet[2842]: E0424 00:37:23.585765 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.586338 kubelet[2842]: W0424 00:37:23.585777 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.586338 kubelet[2842]: E0424 00:37:23.585788 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.589012 kubelet[2842]: E0424 00:37:23.587817 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.589012 kubelet[2842]: W0424 00:37:23.587959 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.589012 kubelet[2842]: E0424 00:37:23.588009 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.589012 kubelet[2842]: E0424 00:37:23.588721 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.589012 kubelet[2842]: W0424 00:37:23.588729 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.589012 kubelet[2842]: E0424 00:37:23.588794 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.593528 kubelet[2842]: E0424 00:37:23.592838 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.593705 kubelet[2842]: W0424 00:37:23.593583 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.593705 kubelet[2842]: E0424 00:37:23.593623 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.859527 kubelet[2842]: E0424 00:37:23.859267 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:23.859527 kubelet[2842]: W0424 00:37:23.859426 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:23.859527 kubelet[2842]: E0424 00:37:23.859478 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:23.962676 kubelet[2842]: E0424 00:37:23.960978 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:23.973651 containerd[1570]: time="2026-04-24T00:37:23.973296221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8db759cbc-2dt49,Uid:9d093a71-5338-4415-aa14-78b6d7df8fbf,Namespace:calico-system,Attempt:0,} returns sandbox id \"07b48ba5c45cba73c5e85d185e53b3435d44008796c7c8ec3f29ccda0aee5f89\""
Apr 24 00:37:24.007949 kubelet[2842]: E0424 00:37:24.003974 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:37:24.111349 kubelet[2842]: E0424 00:37:24.111013 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:24.112452 kubelet[2842]: W0424 00:37:24.111665 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:24.112452 kubelet[2842]: E0424 00:37:24.111700 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:24.112558 containerd[1570]: time="2026-04-24T00:37:24.111690004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\""
Apr 24 00:37:24.120013 kubelet[2842]: E0424 00:37:24.116520 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:24.126823 kubelet[2842]: W0424 00:37:24.126679 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:24.136024 kubelet[2842]: E0424 00:37:24.135465 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:24.144257 kubelet[2842]: E0424 00:37:24.144037 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:24.144257 kubelet[2842]: W0424 00:37:24.144079 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:24.150315 kubelet[2842]: E0424 00:37:24.146633 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:24.161410 kubelet[2842]: E0424 00:37:24.160805 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:24.161410 kubelet[2842]: W0424 00:37:24.160979 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:24.161410 kubelet[2842]: E0424 00:37:24.161200 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:24.163258 kubelet[2842]: E0424 00:37:24.163007 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:24.163258 kubelet[2842]: W0424 00:37:24.163062 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:24.163258 kubelet[2842]: E0424 00:37:24.163077 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:24.163694 kubelet[2842]: E0424 00:37:24.163434 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:24.163694 kubelet[2842]: W0424 00:37:24.163479 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:24.163694 kubelet[2842]: E0424 00:37:24.163488 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:24.167433 kubelet[2842]: E0424 00:37:24.167186 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:24.167433 kubelet[2842]: W0424 00:37:24.167199 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:24.167433 kubelet[2842]: E0424 00:37:24.167214 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:24.169069 kubelet[2842]: E0424 00:37:24.169055 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:24.169069 kubelet[2842]: W0424 00:37:24.169067 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:24.169619 kubelet[2842]: E0424 00:37:24.169078 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 00:37:24.178992 kubelet[2842]: E0424 00:37:24.178763 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 00:37:24.186989 kubelet[2842]: W0424 00:37:24.184437 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 00:37:24.188020 containerd[1570]: time="2026-04-24T00:37:24.187771873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-249n5,Uid:d1b23bf3-e49e-4c90-a9d7-196b32f35107,Namespace:calico-system,Attempt:0,}"
Apr 24 00:37:24.188312 kubelet[2842]: E0424 00:37:24.188238 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.197557 kubelet[2842]: E0424 00:37:24.191729 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.197557 kubelet[2842]: W0424 00:37:24.195360 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.197557 kubelet[2842]: E0424 00:37:24.195517 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.202306 kubelet[2842]: E0424 00:37:24.200343 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.208964 kubelet[2842]: W0424 00:37:24.208178 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.209671 kubelet[2842]: E0424 00:37:24.209419 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.223001 kubelet[2842]: E0424 00:37:24.220828 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.241660 kubelet[2842]: W0424 00:37:24.228674 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.303670 kubelet[2842]: E0424 00:37:24.294724 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.303670 kubelet[2842]: E0424 00:37:24.299578 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.303670 kubelet[2842]: W0424 00:37:24.299617 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.303670 kubelet[2842]: E0424 00:37:24.299678 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.303670 kubelet[2842]: I0424 00:37:24.299792 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4753596c-a0a7-4611-9d92-e3a14065926b-registration-dir\") pod \"csi-node-driver-p4jvm\" (UID: \"4753596c-a0a7-4611-9d92-e3a14065926b\") " pod="calico-system/csi-node-driver-p4jvm" Apr 24 00:37:24.303670 kubelet[2842]: E0424 00:37:24.302097 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.303670 kubelet[2842]: W0424 00:37:24.302282 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.303670 kubelet[2842]: E0424 00:37:24.302340 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.304554 kubelet[2842]: E0424 00:37:24.303849 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.304554 kubelet[2842]: W0424 00:37:24.304003 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.304554 kubelet[2842]: E0424 00:37:24.304210 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.304554 kubelet[2842]: E0424 00:37:24.304406 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.304554 kubelet[2842]: W0424 00:37:24.304412 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.304554 kubelet[2842]: E0424 00:37:24.304420 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.304824 kubelet[2842]: E0424 00:37:24.304735 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.304824 kubelet[2842]: W0424 00:37:24.304787 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.304824 kubelet[2842]: E0424 00:37:24.304796 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.305382 kubelet[2842]: E0424 00:37:24.305314 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.305382 kubelet[2842]: W0424 00:37:24.305363 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.305382 kubelet[2842]: E0424 00:37:24.305373 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.306560 kubelet[2842]: I0424 00:37:24.306469 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4753596c-a0a7-4611-9d92-e3a14065926b-kubelet-dir\") pod \"csi-node-driver-p4jvm\" (UID: \"4753596c-a0a7-4611-9d92-e3a14065926b\") " pod="calico-system/csi-node-driver-p4jvm" Apr 24 00:37:24.308226 kubelet[2842]: E0424 00:37:24.307716 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.308226 kubelet[2842]: W0424 00:37:24.307813 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.313290 kubelet[2842]: E0424 00:37:24.313110 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.314037 kubelet[2842]: E0424 00:37:24.313834 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.314037 kubelet[2842]: W0424 00:37:24.314023 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.314090 kubelet[2842]: E0424 00:37:24.314042 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.314374 kubelet[2842]: E0424 00:37:24.314304 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.314374 kubelet[2842]: W0424 00:37:24.314370 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.314431 kubelet[2842]: E0424 00:37:24.314384 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.323078 kubelet[2842]: E0424 00:37:24.322437 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.328330 kubelet[2842]: W0424 00:37:24.328201 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.328602 kubelet[2842]: E0424 00:37:24.328397 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.346318 kubelet[2842]: E0424 00:37:24.346230 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.346318 kubelet[2842]: W0424 00:37:24.346294 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.346588 kubelet[2842]: E0424 00:37:24.346350 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.349456 kubelet[2842]: E0424 00:37:24.349345 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.349456 kubelet[2842]: W0424 00:37:24.349452 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.349587 kubelet[2842]: E0424 00:37:24.349472 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.349614 kubelet[2842]: I0424 00:37:24.349596 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4753596c-a0a7-4611-9d92-e3a14065926b-socket-dir\") pod \"csi-node-driver-p4jvm\" (UID: \"4753596c-a0a7-4611-9d92-e3a14065926b\") " pod="calico-system/csi-node-driver-p4jvm" Apr 24 00:37:24.355745 kubelet[2842]: E0424 00:37:24.355564 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.355745 kubelet[2842]: W0424 00:37:24.355701 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.358496 kubelet[2842]: E0424 00:37:24.358323 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.359103 kubelet[2842]: E0424 00:37:24.359036 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.359103 kubelet[2842]: W0424 00:37:24.359091 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.359212 kubelet[2842]: E0424 00:37:24.359176 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.360389 kubelet[2842]: E0424 00:37:24.360233 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.360389 kubelet[2842]: W0424 00:37:24.360431 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.361272 kubelet[2842]: E0424 00:37:24.360838 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.363032 kubelet[2842]: E0424 00:37:24.362029 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.363032 kubelet[2842]: W0424 00:37:24.362042 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.363032 kubelet[2842]: E0424 00:37:24.362053 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.364340 kubelet[2842]: E0424 00:37:24.363538 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.364340 kubelet[2842]: W0424 00:37:24.363550 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.364340 kubelet[2842]: E0424 00:37:24.363561 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.405745 containerd[1570]: time="2026-04-24T00:37:24.405535410Z" level=info msg="connecting to shim a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac" address="unix:///run/containerd/s/773931e115e40acf27e471bcb9a64cf410c12b7911761257b06c8ede2160ad5f" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:37:24.472964 kubelet[2842]: E0424 00:37:24.472701 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.472964 kubelet[2842]: W0424 00:37:24.472727 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.472964 kubelet[2842]: E0424 00:37:24.472806 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.474944 kubelet[2842]: E0424 00:37:24.474624 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.474944 kubelet[2842]: W0424 00:37:24.474806 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.474944 kubelet[2842]: E0424 00:37:24.474823 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.477403 kubelet[2842]: E0424 00:37:24.477390 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.478332 kubelet[2842]: W0424 00:37:24.477810 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.478332 kubelet[2842]: E0424 00:37:24.477830 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.478652 kubelet[2842]: I0424 00:37:24.478583 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4753596c-a0a7-4611-9d92-e3a14065926b-varrun\") pod \"csi-node-driver-p4jvm\" (UID: \"4753596c-a0a7-4611-9d92-e3a14065926b\") " pod="calico-system/csi-node-driver-p4jvm" Apr 24 00:37:24.479658 kubelet[2842]: E0424 00:37:24.479646 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.479736 kubelet[2842]: W0424 00:37:24.479725 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.479787 kubelet[2842]: E0424 00:37:24.479777 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.480106 kubelet[2842]: I0424 00:37:24.479841 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4566\" (UniqueName: \"kubernetes.io/projected/4753596c-a0a7-4611-9d92-e3a14065926b-kube-api-access-h4566\") pod \"csi-node-driver-p4jvm\" (UID: \"4753596c-a0a7-4611-9d92-e3a14065926b\") " pod="calico-system/csi-node-driver-p4jvm" Apr 24 00:37:24.480556 kubelet[2842]: E0424 00:37:24.480485 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.481110 kubelet[2842]: W0424 00:37:24.480717 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.481110 kubelet[2842]: E0424 00:37:24.480733 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.486246 kubelet[2842]: E0424 00:37:24.486092 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.487527 kubelet[2842]: W0424 00:37:24.486415 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.487527 kubelet[2842]: E0424 00:37:24.487346 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.488778 kubelet[2842]: E0424 00:37:24.488690 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.488778 kubelet[2842]: W0424 00:37:24.488707 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.488778 kubelet[2842]: E0424 00:37:24.488758 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.493664 kubelet[2842]: E0424 00:37:24.493595 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.493664 kubelet[2842]: W0424 00:37:24.493612 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.493664 kubelet[2842]: E0424 00:37:24.493627 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.494805 kubelet[2842]: E0424 00:37:24.494769 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.494805 kubelet[2842]: W0424 00:37:24.494781 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.494805 kubelet[2842]: E0424 00:37:24.494793 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.500759 kubelet[2842]: E0424 00:37:24.500411 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.500759 kubelet[2842]: W0424 00:37:24.500532 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.500759 kubelet[2842]: E0424 00:37:24.500631 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.506416 kubelet[2842]: E0424 00:37:24.506274 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.506416 kubelet[2842]: W0424 00:37:24.506294 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.506416 kubelet[2842]: E0424 00:37:24.506371 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.508309 kubelet[2842]: E0424 00:37:24.508090 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.508309 kubelet[2842]: W0424 00:37:24.508190 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.508309 kubelet[2842]: E0424 00:37:24.508207 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.509635 kubelet[2842]: E0424 00:37:24.509596 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.509635 kubelet[2842]: W0424 00:37:24.509607 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.509635 kubelet[2842]: E0424 00:37:24.509623 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.524054 kubelet[2842]: E0424 00:37:24.521812 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.526475 kubelet[2842]: W0424 00:37:24.525181 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.526475 kubelet[2842]: E0424 00:37:24.525317 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.530434 kubelet[2842]: E0424 00:37:24.530213 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.530434 kubelet[2842]: W0424 00:37:24.530240 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.531184 kubelet[2842]: E0424 00:37:24.530950 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.532211 kubelet[2842]: E0424 00:37:24.531703 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.532211 kubelet[2842]: W0424 00:37:24.531715 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.532211 kubelet[2842]: E0424 00:37:24.531731 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.537997 kubelet[2842]: E0424 00:37:24.537517 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.540949 kubelet[2842]: W0424 00:37:24.538527 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.541687 kubelet[2842]: E0424 00:37:24.541514 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.544117 kubelet[2842]: E0424 00:37:24.543553 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.546297 kubelet[2842]: W0424 00:37:24.546241 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.546613 kubelet[2842]: E0424 00:37:24.546539 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.553674 kubelet[2842]: E0424 00:37:24.552967 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.555981 kubelet[2842]: W0424 00:37:24.555549 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.557311 kubelet[2842]: E0424 00:37:24.556693 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.558713 kubelet[2842]: E0424 00:37:24.558516 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.558800 kubelet[2842]: W0424 00:37:24.558788 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.559475 kubelet[2842]: E0424 00:37:24.559458 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.572286 kubelet[2842]: E0424 00:37:24.572258 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.572680 kubelet[2842]: W0424 00:37:24.572522 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.572680 kubelet[2842]: E0424 00:37:24.572582 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.669728 systemd[1]: Started cri-containerd-a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac.scope - libcontainer container a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac. Apr 24 00:37:24.682632 kubelet[2842]: E0424 00:37:24.682569 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.686347 kubelet[2842]: W0424 00:37:24.685621 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.686347 kubelet[2842]: E0424 00:37:24.685659 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.696371 kubelet[2842]: E0424 00:37:24.696262 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.696371 kubelet[2842]: W0424 00:37:24.696324 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.696371 kubelet[2842]: E0424 00:37:24.696364 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.698216 kubelet[2842]: E0424 00:37:24.697991 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.698216 kubelet[2842]: W0424 00:37:24.698044 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.698216 kubelet[2842]: E0424 00:37:24.698066 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.704986 kubelet[2842]: E0424 00:37:24.703600 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.704986 kubelet[2842]: W0424 00:37:24.703723 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.706232 kubelet[2842]: E0424 00:37:24.703840 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.708353 kubelet[2842]: E0424 00:37:24.708280 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.708353 kubelet[2842]: W0424 00:37:24.708345 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.708562 kubelet[2842]: E0424 00:37:24.708363 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.709485 kubelet[2842]: E0424 00:37:24.709115 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.709485 kubelet[2842]: W0424 00:37:24.709194 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.709485 kubelet[2842]: E0424 00:37:24.709208 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.715195 kubelet[2842]: E0424 00:37:24.714616 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.728099 kubelet[2842]: W0424 00:37:24.726973 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.728099 kubelet[2842]: E0424 00:37:24.727197 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.730958 kubelet[2842]: E0424 00:37:24.730332 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.730958 kubelet[2842]: W0424 00:37:24.730376 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.730958 kubelet[2842]: E0424 00:37:24.730427 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.730958 kubelet[2842]: E0424 00:37:24.730698 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.730958 kubelet[2842]: W0424 00:37:24.730705 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.730958 kubelet[2842]: E0424 00:37:24.730714 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.731245 kubelet[2842]: E0424 00:37:24.731201 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.731245 kubelet[2842]: W0424 00:37:24.731210 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.731245 kubelet[2842]: E0424 00:37:24.731221 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:24.789835 kubelet[2842]: E0424 00:37:24.789404 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:24.789835 kubelet[2842]: W0424 00:37:24.789559 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:24.789835 kubelet[2842]: E0424 00:37:24.789650 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:24.872569 containerd[1570]: time="2026-04-24T00:37:24.872423779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-249n5,Uid:d1b23bf3-e49e-4c90-a9d7-196b32f35107,Namespace:calico-system,Attempt:0,} returns sandbox id \"a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac\"" Apr 24 00:37:26.134651 kubelet[2842]: E0424 00:37:26.134549 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b" Apr 24 00:37:26.410449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4106642215.mount: Deactivated successfully. Apr 24 00:37:27.987382 containerd[1570]: time="2026-04-24T00:37:27.987244197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:37:27.988753 containerd[1570]: time="2026-04-24T00:37:27.988729815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.5: active requests=0, bytes read=35813139" Apr 24 00:37:27.992195 containerd[1570]: time="2026-04-24T00:37:27.992088521Z" level=info msg="ImageCreate event name:\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:37:28.008002 containerd[1570]: time="2026-04-24T00:37:28.007784971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:37:28.009010 containerd[1570]: time="2026-04-24T00:37:28.008955481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.5\" with image id 
\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\", size \"35812993\" in 3.897223883s" Apr 24 00:37:28.009063 containerd[1570]: time="2026-04-24T00:37:28.009018757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\" returns image reference \"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\"" Apr 24 00:37:28.019637 containerd[1570]: time="2026-04-24T00:37:28.019531181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\"" Apr 24 00:37:28.062447 containerd[1570]: time="2026-04-24T00:37:28.062277449Z" level=info msg="CreateContainer within sandbox \"07b48ba5c45cba73c5e85d185e53b3435d44008796c7c8ec3f29ccda0aee5f89\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 00:37:28.083409 containerd[1570]: time="2026-04-24T00:37:28.083373145Z" level=info msg="Container 6786e0081de57d9a8fc399f338264b2054c11fd8f394c437ccd9fc41edcf76ed: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:37:28.106814 containerd[1570]: time="2026-04-24T00:37:28.106698241Z" level=info msg="CreateContainer within sandbox \"07b48ba5c45cba73c5e85d185e53b3435d44008796c7c8ec3f29ccda0aee5f89\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6786e0081de57d9a8fc399f338264b2054c11fd8f394c437ccd9fc41edcf76ed\"" Apr 24 00:37:28.110028 containerd[1570]: time="2026-04-24T00:37:28.109737867Z" level=info msg="StartContainer for \"6786e0081de57d9a8fc399f338264b2054c11fd8f394c437ccd9fc41edcf76ed\"" Apr 24 00:37:28.111523 containerd[1570]: time="2026-04-24T00:37:28.111502152Z" level=info msg="connecting to shim 6786e0081de57d9a8fc399f338264b2054c11fd8f394c437ccd9fc41edcf76ed" address="unix:///run/containerd/s/8965185261d0c7228afda094bdd8ae198a84a3c5a070ebbd40802b272fd670b0" protocol=ttrpc version=3 Apr 24 
00:37:28.162380 kubelet[2842]: E0424 00:37:28.162274 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b" Apr 24 00:37:28.177784 systemd[1]: Started cri-containerd-6786e0081de57d9a8fc399f338264b2054c11fd8f394c437ccd9fc41edcf76ed.scope - libcontainer container 6786e0081de57d9a8fc399f338264b2054c11fd8f394c437ccd9fc41edcf76ed. Apr 24 00:37:28.293043 containerd[1570]: time="2026-04-24T00:37:28.290256304Z" level=info msg="StartContainer for \"6786e0081de57d9a8fc399f338264b2054c11fd8f394c437ccd9fc41edcf76ed\" returns successfully" Apr 24 00:37:28.576772 kubelet[2842]: E0424 00:37:28.576288 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:37:28.667503 kubelet[2842]: E0424 00:37:28.666791 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.670616 kubelet[2842]: W0424 00:37:28.669707 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.673337 kubelet[2842]: E0424 00:37:28.673235 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.676443 kubelet[2842]: E0424 00:37:28.676329 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.677525 kubelet[2842]: W0424 00:37:28.677095 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.677525 kubelet[2842]: E0424 00:37:28.677330 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.678334 kubelet[2842]: E0424 00:37:28.678314 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.678334 kubelet[2842]: W0424 00:37:28.678323 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.678334 kubelet[2842]: E0424 00:37:28.678333 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.682777 kubelet[2842]: E0424 00:37:28.681393 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.682777 kubelet[2842]: I0424 00:37:28.681673 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-8db759cbc-2dt49" podStartSLOduration=3.685462331 podStartE2EDuration="7.681556744s" podCreationTimestamp="2026-04-24 00:37:21 +0000 UTC" firstStartedPulling="2026-04-24 00:37:24.022755027 +0000 UTC m=+35.410225483" lastFinishedPulling="2026-04-24 00:37:28.018849437 +0000 UTC m=+39.406319896" observedRunningTime="2026-04-24 00:37:28.675073626 +0000 UTC m=+40.062544091" watchObservedRunningTime="2026-04-24 00:37:28.681556744 +0000 UTC m=+40.069027201" Apr 24 00:37:28.682777 kubelet[2842]: W0424 00:37:28.681669 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.682777 kubelet[2842]: E0424 00:37:28.682668 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.685333 kubelet[2842]: E0424 00:37:28.684446 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.685333 kubelet[2842]: W0424 00:37:28.684454 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.685333 kubelet[2842]: E0424 00:37:28.684463 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.685333 kubelet[2842]: E0424 00:37:28.685257 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.685444 kubelet[2842]: W0424 00:37:28.685376 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.685444 kubelet[2842]: E0424 00:37:28.685385 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.686381 kubelet[2842]: E0424 00:37:28.686062 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.686381 kubelet[2842]: W0424 00:37:28.686222 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.686381 kubelet[2842]: E0424 00:37:28.686231 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.687006 kubelet[2842]: E0424 00:37:28.686835 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.687006 kubelet[2842]: W0424 00:37:28.686974 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.687006 kubelet[2842]: E0424 00:37:28.686981 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.687376 kubelet[2842]: E0424 00:37:28.687348 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.687376 kubelet[2842]: W0424 00:37:28.687356 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.687376 kubelet[2842]: E0424 00:37:28.687363 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.688205 kubelet[2842]: E0424 00:37:28.687979 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.688205 kubelet[2842]: W0424 00:37:28.688021 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.688205 kubelet[2842]: E0424 00:37:28.688029 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.688967 kubelet[2842]: E0424 00:37:28.688824 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.688967 kubelet[2842]: W0424 00:37:28.688833 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.688967 kubelet[2842]: E0424 00:37:28.688840 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.689471 kubelet[2842]: E0424 00:37:28.689321 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.689471 kubelet[2842]: W0424 00:37:28.689364 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.689471 kubelet[2842]: E0424 00:37:28.689372 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.689471 kubelet[2842]: E0424 00:37:28.689469 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.689471 kubelet[2842]: W0424 00:37:28.689473 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.689660 kubelet[2842]: E0424 00:37:28.689478 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.689723 kubelet[2842]: E0424 00:37:28.689666 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.689723 kubelet[2842]: W0424 00:37:28.689671 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.689723 kubelet[2842]: E0424 00:37:28.689676 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.690169 kubelet[2842]: E0424 00:37:28.690043 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.690169 kubelet[2842]: W0424 00:37:28.690087 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.690169 kubelet[2842]: E0424 00:37:28.690093 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.789444 kubelet[2842]: E0424 00:37:28.789002 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.789444 kubelet[2842]: W0424 00:37:28.789037 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.789444 kubelet[2842]: E0424 00:37:28.789096 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.792594 kubelet[2842]: E0424 00:37:28.792535 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.794424 kubelet[2842]: W0424 00:37:28.793044 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.794424 kubelet[2842]: E0424 00:37:28.793122 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.795095 kubelet[2842]: E0424 00:37:28.794989 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.795095 kubelet[2842]: W0424 00:37:28.795063 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.795246 kubelet[2842]: E0424 00:37:28.795099 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.796040 kubelet[2842]: E0424 00:37:28.795842 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.796040 kubelet[2842]: W0424 00:37:28.795955 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.796040 kubelet[2842]: E0424 00:37:28.795966 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.796843 kubelet[2842]: E0424 00:37:28.796776 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.796843 kubelet[2842]: W0424 00:37:28.796821 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.796843 kubelet[2842]: E0424 00:37:28.796830 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.809117 kubelet[2842]: E0424 00:37:28.809020 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.809117 kubelet[2842]: W0424 00:37:28.809097 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.809117 kubelet[2842]: E0424 00:37:28.809113 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.832659 kubelet[2842]: E0424 00:37:28.828043 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.832659 kubelet[2842]: W0424 00:37:28.828102 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.832659 kubelet[2842]: E0424 00:37:28.828171 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.832659 kubelet[2842]: E0424 00:37:28.828633 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.832659 kubelet[2842]: W0424 00:37:28.828641 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.832659 kubelet[2842]: E0424 00:37:28.828679 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.837575 kubelet[2842]: E0424 00:37:28.837475 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.837575 kubelet[2842]: W0424 00:37:28.837569 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.837748 kubelet[2842]: E0424 00:37:28.837606 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.841518 kubelet[2842]: E0424 00:37:28.840430 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.841518 kubelet[2842]: W0424 00:37:28.840441 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.841518 kubelet[2842]: E0424 00:37:28.840453 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.841518 kubelet[2842]: E0424 00:37:28.840964 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.841518 kubelet[2842]: W0424 00:37:28.841003 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.841518 kubelet[2842]: E0424 00:37:28.841016 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.841518 kubelet[2842]: E0424 00:37:28.841300 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.841518 kubelet[2842]: W0424 00:37:28.841306 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.841518 kubelet[2842]: E0424 00:37:28.841315 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.841703 kubelet[2842]: E0424 00:37:28.841528 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.841703 kubelet[2842]: W0424 00:37:28.841536 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.841703 kubelet[2842]: E0424 00:37:28.841542 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.841772 kubelet[2842]: E0424 00:37:28.841765 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.841772 kubelet[2842]: W0424 00:37:28.841770 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.841804 kubelet[2842]: E0424 00:37:28.841777 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.842042 kubelet[2842]: E0424 00:37:28.841987 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.842042 kubelet[2842]: W0424 00:37:28.842034 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.842090 kubelet[2842]: E0424 00:37:28.842044 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.842241 kubelet[2842]: E0424 00:37:28.842189 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.842241 kubelet[2842]: W0424 00:37:28.842234 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.842286 kubelet[2842]: E0424 00:37:28.842250 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:28.842534 kubelet[2842]: E0424 00:37:28.842460 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.842534 kubelet[2842]: W0424 00:37:28.842519 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.842534 kubelet[2842]: E0424 00:37:28.842527 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:28.843321 kubelet[2842]: E0424 00:37:28.843260 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:28.843321 kubelet[2842]: W0424 00:37:28.843304 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:28.843321 kubelet[2842]: E0424 00:37:28.843313 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.579799 kubelet[2842]: E0424 00:37:29.579713 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:37:29.609375 kubelet[2842]: E0424 00:37:29.609289 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.609375 kubelet[2842]: W0424 00:37:29.609364 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.610260 kubelet[2842]: E0424 00:37:29.609412 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.612844 kubelet[2842]: E0424 00:37:29.611807 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.613951 kubelet[2842]: W0424 00:37:29.613673 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.614082 kubelet[2842]: E0424 00:37:29.613991 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.617971 kubelet[2842]: E0424 00:37:29.617368 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.617971 kubelet[2842]: W0424 00:37:29.617396 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.617971 kubelet[2842]: E0424 00:37:29.617553 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.617971 kubelet[2842]: E0424 00:37:29.617759 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.617971 kubelet[2842]: W0424 00:37:29.617765 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.617971 kubelet[2842]: E0424 00:37:29.617773 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.621094 kubelet[2842]: E0424 00:37:29.620546 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.621402 kubelet[2842]: W0424 00:37:29.621175 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.621402 kubelet[2842]: E0424 00:37:29.621228 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.622614 kubelet[2842]: E0424 00:37:29.622325 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.622614 kubelet[2842]: W0424 00:37:29.622364 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.622614 kubelet[2842]: E0424 00:37:29.622374 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.623600 kubelet[2842]: E0424 00:37:29.623410 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.623840 kubelet[2842]: W0424 00:37:29.623610 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.623840 kubelet[2842]: E0424 00:37:29.623712 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.624675 kubelet[2842]: E0424 00:37:29.624416 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.624675 kubelet[2842]: W0424 00:37:29.624517 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.624675 kubelet[2842]: E0424 00:37:29.624526 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.624675 kubelet[2842]: E0424 00:37:29.624644 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.624675 kubelet[2842]: W0424 00:37:29.624648 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.624675 kubelet[2842]: E0424 00:37:29.624654 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.626107 kubelet[2842]: E0424 00:37:29.625984 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.626107 kubelet[2842]: W0424 00:37:29.625992 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.626107 kubelet[2842]: E0424 00:37:29.626005 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.626951 kubelet[2842]: E0424 00:37:29.626796 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.626951 kubelet[2842]: W0424 00:37:29.626808 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.626951 kubelet[2842]: E0424 00:37:29.626820 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.627296 kubelet[2842]: E0424 00:37:29.627221 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.627296 kubelet[2842]: W0424 00:37:29.627233 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.627296 kubelet[2842]: E0424 00:37:29.627243 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.628939 kubelet[2842]: E0424 00:37:29.627943 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.628939 kubelet[2842]: W0424 00:37:29.627955 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.628939 kubelet[2842]: E0424 00:37:29.627964 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.628939 kubelet[2842]: E0424 00:37:29.628072 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.628939 kubelet[2842]: W0424 00:37:29.628077 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.628939 kubelet[2842]: E0424 00:37:29.628083 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.628939 kubelet[2842]: E0424 00:37:29.628231 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.628939 kubelet[2842]: W0424 00:37:29.628237 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.628939 kubelet[2842]: E0424 00:37:29.628244 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.636570 kubelet[2842]: E0424 00:37:29.636522 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.636780 kubelet[2842]: W0424 00:37:29.636579 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.636780 kubelet[2842]: E0424 00:37:29.636597 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.637516 kubelet[2842]: E0424 00:37:29.637475 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.637516 kubelet[2842]: W0424 00:37:29.637486 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.637516 kubelet[2842]: E0424 00:37:29.637497 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.638747 kubelet[2842]: E0424 00:37:29.638552 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.638747 kubelet[2842]: W0424 00:37:29.638610 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.638747 kubelet[2842]: E0424 00:37:29.638621 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.639747 kubelet[2842]: E0424 00:37:29.639586 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.639747 kubelet[2842]: W0424 00:37:29.639636 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.639747 kubelet[2842]: E0424 00:37:29.639646 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.640390 kubelet[2842]: E0424 00:37:29.640360 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.640390 kubelet[2842]: W0424 00:37:29.640371 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.640832 kubelet[2842]: E0424 00:37:29.640625 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.641344 kubelet[2842]: E0424 00:37:29.641265 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.641611 kubelet[2842]: W0424 00:37:29.641565 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.641611 kubelet[2842]: E0424 00:37:29.641611 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.642838 kubelet[2842]: E0424 00:37:29.642428 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.642838 kubelet[2842]: W0424 00:37:29.642440 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.642838 kubelet[2842]: E0424 00:37:29.642451 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.642838 kubelet[2842]: E0424 00:37:29.642727 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.642838 kubelet[2842]: W0424 00:37:29.642733 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.642838 kubelet[2842]: E0424 00:37:29.642740 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.643207 kubelet[2842]: E0424 00:37:29.643175 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.643207 kubelet[2842]: W0424 00:37:29.643182 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.643207 kubelet[2842]: E0424 00:37:29.643188 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.643533 kubelet[2842]: E0424 00:37:29.643455 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.643643 kubelet[2842]: W0424 00:37:29.643600 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.643643 kubelet[2842]: E0424 00:37:29.643640 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.644266 kubelet[2842]: E0424 00:37:29.644227 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.644266 kubelet[2842]: W0424 00:37:29.644267 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.644327 kubelet[2842]: E0424 00:37:29.644275 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.644594 kubelet[2842]: E0424 00:37:29.644545 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.644594 kubelet[2842]: W0424 00:37:29.644583 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.644594 kubelet[2842]: E0424 00:37:29.644589 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.645297 kubelet[2842]: E0424 00:37:29.644967 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.645297 kubelet[2842]: W0424 00:37:29.644972 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.645297 kubelet[2842]: E0424 00:37:29.644978 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.646707 kubelet[2842]: E0424 00:37:29.646502 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.648051 kubelet[2842]: W0424 00:37:29.646796 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.648566 kubelet[2842]: E0424 00:37:29.648323 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.651607 kubelet[2842]: E0424 00:37:29.651562 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.651649 kubelet[2842]: W0424 00:37:29.651608 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.651649 kubelet[2842]: E0424 00:37:29.651622 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.658178 kubelet[2842]: E0424 00:37:29.658107 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.658178 kubelet[2842]: W0424 00:37:29.658125 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.658516 kubelet[2842]: E0424 00:37:29.658219 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.661626 kubelet[2842]: E0424 00:37:29.661417 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.661626 kubelet[2842]: W0424 00:37:29.661466 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.661626 kubelet[2842]: E0424 00:37:29.661477 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:37:29.662429 kubelet[2842]: E0424 00:37:29.662194 2842 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:37:29.662429 kubelet[2842]: W0424 00:37:29.662248 2842 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:37:29.662429 kubelet[2842]: E0424 00:37:29.662260 2842 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:37:29.955252 containerd[1570]: time="2026-04-24T00:37:29.954725587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:37:29.958634 containerd[1570]: time="2026-04-24T00:37:29.957989136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5: active requests=0, bytes read=4601981" Apr 24 00:37:29.961043 containerd[1570]: time="2026-04-24T00:37:29.960986427Z" level=info msg="ImageCreate event name:\"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:37:29.963172 containerd[1570]: time="2026-04-24T00:37:29.963033954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:37:29.966015 containerd[1570]: time="2026-04-24T00:37:29.965771855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" with image id \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\", size \"7563366\" in 1.946214205s" Apr 24 00:37:29.966015 containerd[1570]: time="2026-04-24T00:37:29.965964966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" returns image reference \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\"" Apr 24 00:37:29.978099 containerd[1570]: time="2026-04-24T00:37:29.978016963Z" level=info msg="CreateContainer within sandbox \"a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 00:37:29.992989 containerd[1570]: time="2026-04-24T00:37:29.992581689Z" level=info msg="Container a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:37:30.014372 containerd[1570]: time="2026-04-24T00:37:30.014271330Z" level=info msg="CreateContainer within sandbox \"a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385\"" Apr 24 00:37:30.023963 containerd[1570]: time="2026-04-24T00:37:30.021728237Z" level=info msg="StartContainer for \"a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385\"" Apr 24 00:37:30.031057 containerd[1570]: time="2026-04-24T00:37:30.030963434Z" level=info msg="connecting to shim a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385" address="unix:///run/containerd/s/773931e115e40acf27e471bcb9a64cf410c12b7911761257b06c8ede2160ad5f" protocol=ttrpc version=3 Apr 24 00:37:30.121566 systemd[1]: Started cri-containerd-a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385.scope - libcontainer container 
a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385. Apr 24 00:37:30.184787 kubelet[2842]: E0424 00:37:30.184747 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b" Apr 24 00:37:30.390963 containerd[1570]: time="2026-04-24T00:37:30.390188129Z" level=info msg="StartContainer for \"a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385\" returns successfully" Apr 24 00:37:30.416627 systemd[1]: cri-containerd-a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385.scope: Deactivated successfully. Apr 24 00:37:30.418571 systemd[1]: cri-containerd-a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385.scope: Consumed 97ms CPU time, 6.1M memory peak, 4.1M written to disk. Apr 24 00:37:30.438446 containerd[1570]: time="2026-04-24T00:37:30.438346149Z" level=info msg="received container exit event container_id:\"a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385\" id:\"a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385\" pid:3613 exited_at:{seconds:1776991050 nanos:436435079}" Apr 24 00:37:30.512726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385-rootfs.mount: Deactivated successfully. 
Apr 24 00:37:30.640374 kubelet[2842]: E0424 00:37:30.639995 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:37:31.701424 containerd[1570]: time="2026-04-24T00:37:31.701330248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\""
Apr 24 00:37:32.125312 kubelet[2842]: E0424 00:37:32.124608 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:34.126772 kubelet[2842]: E0424 00:37:34.126404 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:36.216481 kubelet[2842]: E0424 00:37:36.214336 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:38.157253 kubelet[2842]: E0424 00:37:38.156803 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:40.127967 kubelet[2842]: E0424 00:37:40.127082 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:42.142439 kubelet[2842]: E0424 00:37:42.140154 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:44.138434 kubelet[2842]: E0424 00:37:44.136441 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:46.163947 kubelet[2842]: E0424 00:37:46.163486 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:48.137346 kubelet[2842]: E0424 00:37:48.137099 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:50.157362 kubelet[2842]: E0424 00:37:50.156759 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:52.182494 kubelet[2842]: E0424 00:37:52.176681 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:54.191044 kubelet[2842]: E0424 00:37:54.189756 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:56.205427 kubelet[2842]: E0424 00:37:56.197840 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:37:57.194095 kubelet[2842]: E0424 00:37:57.189124 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:37:58.213485 kubelet[2842]: E0424 00:37:58.212581 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:00.179709 kubelet[2842]: E0424 00:38:00.179619 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:02.165037 kubelet[2842]: E0424 00:38:02.164684 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:02.166295 kubelet[2842]: E0424 00:38:02.165366 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:38:03.161183 kubelet[2842]: E0424 00:38:03.161038 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:38:04.135595 kubelet[2842]: E0424 00:38:04.134682 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:06.172057 kubelet[2842]: E0424 00:38:06.170779 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:07.865578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522920009.mount: Deactivated successfully.
Apr 24 00:38:08.107363 containerd[1570]: time="2026-04-24T00:38:08.107218773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:38:08.109431 containerd[1570]: time="2026-04-24T00:38:08.109348599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.5: active requests=0, bytes read=159374404"
Apr 24 00:38:08.125803 kubelet[2842]: E0424 00:38:08.125604 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:08.135668 containerd[1570]: time="2026-04-24T00:38:08.132618948Z" level=info msg="ImageCreate event name:\"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:38:08.258467 containerd[1570]: time="2026-04-24T00:38:08.257789987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:38:08.269320 containerd[1570]: time="2026-04-24T00:38:08.269154620Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.5\" with image id \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\", size \"159374266\" in 36.567741669s"
Apr 24 00:38:08.271038 containerd[1570]: time="2026-04-24T00:38:08.270997721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\" returns image reference \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\""
Apr 24 00:38:08.403676 containerd[1570]: time="2026-04-24T00:38:08.403083906Z" level=info msg="CreateContainer within sandbox \"a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 24 00:38:08.571402 containerd[1570]: time="2026-04-24T00:38:08.571122297Z" level=info msg="Container e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:38:08.682131 containerd[1570]: time="2026-04-24T00:38:08.681513510Z" level=info msg="CreateContainer within sandbox \"a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7\""
Apr 24 00:38:08.696020 containerd[1570]: time="2026-04-24T00:38:08.695848697Z" level=info msg="StartContainer for \"e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7\""
Apr 24 00:38:08.712674 containerd[1570]: time="2026-04-24T00:38:08.712548714Z" level=info msg="connecting to shim e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7" address="unix:///run/containerd/s/773931e115e40acf27e471bcb9a64cf410c12b7911761257b06c8ede2160ad5f" protocol=ttrpc version=3
Apr 24 00:38:08.939373 systemd[1]: Started cri-containerd-e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7.scope - libcontainer container e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7.
Apr 24 00:38:09.394813 containerd[1570]: time="2026-04-24T00:38:09.394174512Z" level=info msg="StartContainer for \"e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7\" returns successfully"
Apr 24 00:38:10.127045 kubelet[2842]: E0424 00:38:10.126656 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:10.699213 systemd[1]: cri-containerd-e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7.scope: Deactivated successfully.
Apr 24 00:38:10.778175 containerd[1570]: time="2026-04-24T00:38:10.777971991Z" level=info msg="received container exit event container_id:\"e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7\" id:\"e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7\" pid:3674 exited_at:{seconds:1776991090 nanos:750414873}"
Apr 24 00:38:11.134025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7-rootfs.mount: Deactivated successfully.
Apr 24 00:38:12.130015 kubelet[2842]: E0424 00:38:12.128717 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:12.352987 containerd[1570]: time="2026-04-24T00:38:12.352532839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\""
Apr 24 00:38:13.170771 kubelet[2842]: E0424 00:38:13.169981 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:38:14.133552 kubelet[2842]: E0424 00:38:14.132182 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:16.126507 kubelet[2842]: E0424 00:38:16.126420 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:18.148175 kubelet[2842]: E0424 00:38:18.147388 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:20.134418 kubelet[2842]: E0424 00:38:20.134147 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:22.142989 kubelet[2842]: E0424 00:38:22.136507 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:24.187798 kubelet[2842]: E0424 00:38:24.184748 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:26.125683 kubelet[2842]: E0424 00:38:26.125536 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:27.355802 kubelet[2842]: E0424 00:38:27.352259 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:29.247983 kubelet[2842]: E0424 00:38:29.243287 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:31.129192 kubelet[2842]: E0424 00:38:31.127850 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:33.196713 kubelet[2842]: E0424 00:38:33.193061 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:35.189616 kubelet[2842]: E0424 00:38:35.189510 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:35.975925 containerd[1570]: time="2026-04-24T00:38:35.975577477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:38:35.978361 containerd[1570]: time="2026-04-24T00:38:35.978206882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.5: active requests=0, bytes read=67713351"
Apr 24 00:38:35.991951 containerd[1570]: time="2026-04-24T00:38:35.991173490Z" level=info msg="ImageCreate event name:\"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:38:36.012913 containerd[1570]: time="2026-04-24T00:38:36.012713050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:38:36.022955 containerd[1570]: time="2026-04-24T00:38:36.018022603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.5\" with image id \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\", size \"70674776\" in 23.665384111s"
Apr 24 00:38:36.062124 containerd[1570]: time="2026-04-24T00:38:36.023560484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\" returns image reference \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\""
Apr 24 00:38:36.320315 containerd[1570]: time="2026-04-24T00:38:36.320203869Z" level=info msg="CreateContainer within sandbox \"a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 24 00:38:36.471823 containerd[1570]: time="2026-04-24T00:38:36.470202307Z" level=info msg="Container 6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:38:36.553236 containerd[1570]: time="2026-04-24T00:38:36.553163079Z" level=info msg="CreateContainer within sandbox \"a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb\""
Apr 24 00:38:36.561713 containerd[1570]: time="2026-04-24T00:38:36.561449218Z" level=info msg="StartContainer for \"6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb\""
Apr 24 00:38:36.568309 containerd[1570]: time="2026-04-24T00:38:36.568171317Z" level=info msg="connecting to shim 6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb" address="unix:///run/containerd/s/773931e115e40acf27e471bcb9a64cf410c12b7911761257b06c8ede2160ad5f" protocol=ttrpc version=3
Apr 24 00:38:36.766844 systemd[1]: Started cri-containerd-6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb.scope - libcontainer container 6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb.
Apr 24 00:38:37.181953 kubelet[2842]: E0424 00:38:37.180841 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:37.541033 containerd[1570]: time="2026-04-24T00:38:37.540384583Z" level=info msg="StartContainer for \"6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb\" returns successfully"
Apr 24 00:38:39.214715 kubelet[2842]: E0424 00:38:39.207815 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:41.179905 kubelet[2842]: E0424 00:38:41.179119 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:43.183947 kubelet[2842]: E0424 00:38:43.183831 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:43.188785 kubelet[2842]: E0424 00:38:43.184740 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:38:44.856903 systemd[1]: cri-containerd-6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb.scope: Deactivated successfully.
Apr 24 00:38:44.857600 systemd[1]: cri-containerd-6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb.scope: Consumed 5.631s CPU time, 177.8M memory peak, 4.1M read from disk, 173.7M written to disk.
Apr 24 00:38:44.881918 containerd[1570]: time="2026-04-24T00:38:44.881761376Z" level=info msg="received container exit event container_id:\"6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb\" id:\"6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb\" pid:3736 exited_at:{seconds:1776991124 nanos:877214597}"
Apr 24 00:38:44.971206 kubelet[2842]: I0424 00:38:44.971086 2842 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Apr 24 00:38:45.238161 systemd[1]: Created slice kubepods-besteffort-pod4753596c_a0a7_4611_9d92_e3a14065926b.slice - libcontainer container kubepods-besteffort-pod4753596c_a0a7_4611_9d92_e3a14065926b.slice.
Apr 24 00:38:45.342454 containerd[1570]: time="2026-04-24T00:38:45.342336817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p4jvm,Uid:4753596c-a0a7-4611-9d92-e3a14065926b,Namespace:calico-system,Attempt:0,}"
Apr 24 00:38:45.378776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb-rootfs.mount: Deactivated successfully.
Apr 24 00:38:46.270348 kubelet[2842]: I0424 00:38:46.267917 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfknz\" (UniqueName: \"kubernetes.io/projected/742c480a-2ffe-4ce5-a8d2-bc4cb7575d82-kube-api-access-rfknz\") pod \"coredns-7d764666f9-z2r4v\" (UID: \"742c480a-2ffe-4ce5-a8d2-bc4cb7575d82\") " pod="kube-system/coredns-7d764666f9-z2r4v"
Apr 24 00:38:46.283987 kubelet[2842]: I0424 00:38:46.281174 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/742c480a-2ffe-4ce5-a8d2-bc4cb7575d82-config-volume\") pod \"coredns-7d764666f9-z2r4v\" (UID: \"742c480a-2ffe-4ce5-a8d2-bc4cb7575d82\") " pod="kube-system/coredns-7d764666f9-z2r4v"
Apr 24 00:38:46.295786 systemd[1]: Created slice kubepods-burstable-pod742c480a_2ffe_4ce5_a8d2_bc4cb7575d82.slice - libcontainer container kubepods-burstable-pod742c480a_2ffe_4ce5_a8d2_bc4cb7575d82.slice.
Apr 24 00:38:46.485314 kubelet[2842]: I0424 00:38:46.428793 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/40de480d-0268-4354-82c2-131a37e9e48c-calico-apiserver-certs\") pod \"calico-apiserver-66975fdd9d-5jt89\" (UID: \"40de480d-0268-4354-82c2-131a37e9e48c\") " pod="calico-system/calico-apiserver-66975fdd9d-5jt89"
Apr 24 00:38:46.566400 kubelet[2842]: I0424 00:38:46.565698 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z74d9\" (UniqueName: \"kubernetes.io/projected/40de480d-0268-4354-82c2-131a37e9e48c-kube-api-access-z74d9\") pod \"calico-apiserver-66975fdd9d-5jt89\" (UID: \"40de480d-0268-4354-82c2-131a37e9e48c\") " pod="calico-system/calico-apiserver-66975fdd9d-5jt89"
Apr 24 00:38:46.568913 kubelet[2842]: I0424 00:38:46.568830 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f2e08c2-9fb6-4155-b7e3-6d9c13e99150-tigera-ca-bundle\") pod \"calico-kube-controllers-c8bc45cf7-shmzv\" (UID: \"2f2e08c2-9fb6-4155-b7e3-6d9c13e99150\") " pod="calico-system/calico-kube-controllers-c8bc45cf7-shmzv"
Apr 24 00:38:46.569169 kubelet[2842]: I0424 00:38:46.569003 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhgwk\" (UniqueName: \"kubernetes.io/projected/2f2e08c2-9fb6-4155-b7e3-6d9c13e99150-kube-api-access-rhgwk\") pod \"calico-kube-controllers-c8bc45cf7-shmzv\" (UID: \"2f2e08c2-9fb6-4155-b7e3-6d9c13e99150\") " pod="calico-system/calico-kube-controllers-c8bc45cf7-shmzv"
Apr 24 00:38:46.610360 systemd[1]: Created slice kubepods-besteffort-pod40de480d_0268_4354_82c2_131a37e9e48c.slice - libcontainer container kubepods-besteffort-pod40de480d_0268_4354_82c2_131a37e9e48c.slice.
Apr 24 00:38:46.894265 kubelet[2842]: I0424 00:38:46.891443 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/093ff803-c52d-48d6-b99b-3a99f012e7a9-calico-apiserver-certs\") pod \"calico-apiserver-66975fdd9d-7fq42\" (UID: \"093ff803-c52d-48d6-b99b-3a99f012e7a9\") " pod="calico-system/calico-apiserver-66975fdd9d-7fq42"
Apr 24 00:38:46.894265 kubelet[2842]: I0424 00:38:46.891582 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg5fq\" (UniqueName: \"kubernetes.io/projected/093ff803-c52d-48d6-b99b-3a99f012e7a9-kube-api-access-wg5fq\") pod \"calico-apiserver-66975fdd9d-7fq42\" (UID: \"093ff803-c52d-48d6-b99b-3a99f012e7a9\") " pod="calico-system/calico-apiserver-66975fdd9d-7fq42"
Apr 24 00:38:46.894265 kubelet[2842]: I0424 00:38:46.891632 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-nginx-config\") pod \"whisker-57995fc485-nczn8\" (UID: \"8fde9b4d-ddd2-4cff-90fd-c8ea6539628b\") " pod="calico-system/whisker-57995fc485-nczn8"
Apr 24 00:38:47.007696 systemd[1]: Created slice kubepods-besteffort-pod2f2e08c2_9fb6_4155_b7e3_6d9c13e99150.slice - libcontainer container kubepods-besteffort-pod2f2e08c2_9fb6_4155_b7e3_6d9c13e99150.slice.
Apr 24 00:38:47.270014 containerd[1570]: time="2026-04-24T00:38:47.259457680Z" level=error msg="Failed to destroy network for sandbox \"e42d03a5e56f68c8accec1681f14973ef8f3e674c93be84af01bcf8a6acc7648\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:38:47.278065 containerd[1570]: time="2026-04-24T00:38:47.273687917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p4jvm,Uid:4753596c-a0a7-4611-9d92-e3a14065926b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e42d03a5e56f68c8accec1681f14973ef8f3e674c93be84af01bcf8a6acc7648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:38:47.365112 systemd[1]: run-netns-cni\x2da4c4e862\x2dd427\x2d8fa1\x2d1a9c\x2d6be104868fda.mount: Deactivated successfully.
Apr 24 00:38:47.369024 kubelet[2842]: I0424 00:38:47.364227 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e790c486-e226-481e-a682-c42b85775d13-config-volume\") pod \"coredns-7d764666f9-dpk5v\" (UID: \"e790c486-e226-481e-a682-c42b85775d13\") " pod="kube-system/coredns-7d764666f9-dpk5v"
Apr 24 00:38:47.385778 systemd[1]: Created slice kubepods-besteffort-pod8fde9b4d_ddd2_4cff_90fd_c8ea6539628b.slice - libcontainer container kubepods-besteffort-pod8fde9b4d_ddd2_4cff_90fd_c8ea6539628b.slice.
Apr 24 00:38:47.403592 kubelet[2842]: I0424 00:38:47.403264 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0e7964a-87e1-45b9-8f9b-26c7b6d887fe-goldmane-ca-bundle\") pod \"goldmane-7fb6cdc5d9-bxjp2\" (UID: \"e0e7964a-87e1-45b9-8f9b-26c7b6d887fe\") " pod="calico-system/goldmane-7fb6cdc5d9-bxjp2"
Apr 24 00:38:47.409950 kubelet[2842]: E0424 00:38:47.409296 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e42d03a5e56f68c8accec1681f14973ef8f3e674c93be84af01bcf8a6acc7648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:38:47.519040 kubelet[2842]: E0424 00:38:47.517604 2842 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e42d03a5e56f68c8accec1681f14973ef8f3e674c93be84af01bcf8a6acc7648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p4jvm"
Apr 24 00:38:47.519040 kubelet[2842]: E0424 00:38:47.518010 2842 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e42d03a5e56f68c8accec1681f14973ef8f3e674c93be84af01bcf8a6acc7648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p4jvm"
Apr 24 00:38:47.519040 kubelet[2842]: E0424 00:38:47.518435 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p4jvm_calico-system(4753596c-a0a7-4611-9d92-e3a14065926b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p4jvm_calico-system(4753596c-a0a7-4611-9d92-e3a14065926b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e42d03a5e56f68c8accec1681f14973ef8f3e674c93be84af01bcf8a6acc7648\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p4jvm" podUID="4753596c-a0a7-4611-9d92-e3a14065926b"
Apr 24 00:38:47.526137 kubelet[2842]: I0424 00:38:47.521496 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-whisker-backend-key-pair\") pod \"whisker-57995fc485-nczn8\" (UID: \"8fde9b4d-ddd2-4cff-90fd-c8ea6539628b\") " pod="calico-system/whisker-57995fc485-nczn8"
Apr 24 00:38:47.526137 kubelet[2842]: I0424 00:38:47.522797 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pgbl\" (UniqueName: \"kubernetes.io/projected/e0e7964a-87e1-45b9-8f9b-26c7b6d887fe-kube-api-access-7pgbl\") pod \"goldmane-7fb6cdc5d9-bxjp2\" (UID: \"e0e7964a-87e1-45b9-8f9b-26c7b6d887fe\") " pod="calico-system/goldmane-7fb6cdc5d9-bxjp2"
Apr 24 00:38:47.531904 kubelet[2842]: I0424 00:38:47.527662 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57qm\" (UniqueName: \"kubernetes.io/projected/e790c486-e226-481e-a682-c42b85775d13-kube-api-access-l57qm\") pod \"coredns-7d764666f9-dpk5v\" (UID: \"e790c486-e226-481e-a682-c42b85775d13\") " pod="kube-system/coredns-7d764666f9-dpk5v"
Apr 24 00:38:47.531904 kubelet[2842]: I0424 00:38:47.530627 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66dmt\" (UniqueName: \"kubernetes.io/projected/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-kube-api-access-66dmt\") pod \"whisker-57995fc485-nczn8\" (UID: \"8fde9b4d-ddd2-4cff-90fd-c8ea6539628b\") " pod="calico-system/whisker-57995fc485-nczn8"
Apr 24 00:38:47.531904 kubelet[2842]: I0424 00:38:47.530890 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0e7964a-87e1-45b9-8f9b-26c7b6d887fe-config\") pod \"goldmane-7fb6cdc5d9-bxjp2\" (UID: \"e0e7964a-87e1-45b9-8f9b-26c7b6d887fe\") " pod="calico-system/goldmane-7fb6cdc5d9-bxjp2"
Apr 24 00:38:47.531904 kubelet[2842]: I0424 00:38:47.530997 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-whisker-ca-bundle\") pod \"whisker-57995fc485-nczn8\" (UID: \"8fde9b4d-ddd2-4cff-90fd-c8ea6539628b\") " pod="calico-system/whisker-57995fc485-nczn8"
Apr 24 00:38:47.531904 kubelet[2842]: I0424 00:38:47.531016 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e0e7964a-87e1-45b9-8f9b-26c7b6d887fe-goldmane-key-pair\") pod \"goldmane-7fb6cdc5d9-bxjp2\" (UID: \"e0e7964a-87e1-45b9-8f9b-26c7b6d887fe\") " pod="calico-system/goldmane-7fb6cdc5d9-bxjp2"
Apr 24 00:38:47.607131 systemd[1]: Created slice kubepods-besteffort-pod093ff803_c52d_48d6_b99b_3a99f012e7a9.slice - libcontainer container kubepods-besteffort-pod093ff803_c52d_48d6_b99b_3a99f012e7a9.slice.
Apr 24 00:38:47.698533 kubelet[2842]: E0424 00:38:47.698458 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:38:47.753014 containerd[1570]: time="2026-04-24T00:38:47.752820646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-z2r4v,Uid:742c480a-2ffe-4ce5-a8d2-bc4cb7575d82,Namespace:kube-system,Attempt:0,}" Apr 24 00:38:48.071990 containerd[1570]: time="2026-04-24T00:38:48.071615368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8bc45cf7-shmzv,Uid:2f2e08c2-9fb6-4155-b7e3-6d9c13e99150,Namespace:calico-system,Attempt:0,}" Apr 24 00:38:48.417650 systemd[1]: Created slice kubepods-besteffort-pode0e7964a_87e1_45b9_8f9b_26c7b6d887fe.slice - libcontainer container kubepods-besteffort-pode0e7964a_87e1_45b9_8f9b_26c7b6d887fe.slice. Apr 24 00:38:48.500710 systemd[1]: Created slice kubepods-burstable-pode790c486_e226_481e_a682_c42b85775d13.slice - libcontainer container kubepods-burstable-pode790c486_e226_481e_a682_c42b85775d13.slice. 
Apr 24 00:38:48.627065 containerd[1570]: time="2026-04-24T00:38:48.626989986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66975fdd9d-5jt89,Uid:40de480d-0268-4354-82c2-131a37e9e48c,Namespace:calico-system,Attempt:0,}" Apr 24 00:38:48.663085 containerd[1570]: time="2026-04-24T00:38:48.629077578Z" level=info msg="CreateContainer within sandbox \"a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 00:38:48.941114 containerd[1570]: time="2026-04-24T00:38:48.940756404Z" level=info msg="Container 79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:38:49.122515 containerd[1570]: time="2026-04-24T00:38:49.122469103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66975fdd9d-7fq42,Uid:093ff803-c52d-48d6-b99b-3a99f012e7a9,Namespace:calico-system,Attempt:0,}" Apr 24 00:38:49.320582 containerd[1570]: time="2026-04-24T00:38:49.320309959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57995fc485-nczn8,Uid:8fde9b4d-ddd2-4cff-90fd-c8ea6539628b,Namespace:calico-system,Attempt:0,}" Apr 24 00:38:49.353422 kubelet[2842]: E0424 00:38:49.350250 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:38:49.388339 containerd[1570]: time="2026-04-24T00:38:49.388251139Z" level=info msg="CreateContainer within sandbox \"a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\"" Apr 24 00:38:49.454322 containerd[1570]: time="2026-04-24T00:38:49.410061498Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-bxjp2,Uid:e0e7964a-87e1-45b9-8f9b-26c7b6d887fe,Namespace:calico-system,Attempt:0,}" Apr 24 00:38:49.455123 containerd[1570]: time="2026-04-24T00:38:49.410086806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-dpk5v,Uid:e790c486-e226-481e-a682-c42b85775d13,Namespace:kube-system,Attempt:0,}" Apr 24 00:38:49.487922 containerd[1570]: time="2026-04-24T00:38:49.485616040Z" level=info msg="StartContainer for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\"" Apr 24 00:38:50.096466 containerd[1570]: time="2026-04-24T00:38:49.980847755Z" level=info msg="connecting to shim 79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e" address="unix:///run/containerd/s/773931e115e40acf27e471bcb9a64cf410c12b7911761257b06c8ede2160ad5f" protocol=ttrpc version=3 Apr 24 00:38:50.405077 containerd[1570]: time="2026-04-24T00:38:50.403590498Z" level=error msg="Failed to destroy network for sandbox \"c206f3dc1600100e9f9f3fef1f8e7691c5dca5ae31a70bd5fc27f860d6545e58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:50.495491 containerd[1570]: time="2026-04-24T00:38:50.495059340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8bc45cf7-shmzv,Uid:2f2e08c2-9fb6-4155-b7e3-6d9c13e99150,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c206f3dc1600100e9f9f3fef1f8e7691c5dca5ae31a70bd5fc27f860d6545e58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:50.506889 systemd[1]: run-netns-cni\x2d197eed5c\x2d7335\x2d5161\x2de3f9\x2d5ba6dac2081f.mount: Deactivated successfully. 
Apr 24 00:38:50.508325 kubelet[2842]: E0424 00:38:50.507554 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c206f3dc1600100e9f9f3fef1f8e7691c5dca5ae31a70bd5fc27f860d6545e58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:50.514238 kubelet[2842]: E0424 00:38:50.511703 2842 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c206f3dc1600100e9f9f3fef1f8e7691c5dca5ae31a70bd5fc27f860d6545e58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c8bc45cf7-shmzv" Apr 24 00:38:50.516992 kubelet[2842]: E0424 00:38:50.516148 2842 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c206f3dc1600100e9f9f3fef1f8e7691c5dca5ae31a70bd5fc27f860d6545e58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c8bc45cf7-shmzv" Apr 24 00:38:50.526786 kubelet[2842]: E0424 00:38:50.525848 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c8bc45cf7-shmzv_calico-system(2f2e08c2-9fb6-4155-b7e3-6d9c13e99150)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c8bc45cf7-shmzv_calico-system(2f2e08c2-9fb6-4155-b7e3-6d9c13e99150)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c206f3dc1600100e9f9f3fef1f8e7691c5dca5ae31a70bd5fc27f860d6545e58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c8bc45cf7-shmzv" podUID="2f2e08c2-9fb6-4155-b7e3-6d9c13e99150" Apr 24 00:38:50.655665 systemd[1]: Started cri-containerd-79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e.scope - libcontainer container 79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e. Apr 24 00:38:51.128636 containerd[1570]: time="2026-04-24T00:38:51.128488650Z" level=error msg="Failed to destroy network for sandbox \"4b078fa76752059c4bdec966f81084b4975a1bb0cbc145f41d6ac86ad0fb52b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:51.154525 systemd[1]: run-netns-cni\x2d22ef411f\x2d559b\x2d1726\x2dddc5\x2d8b7259779a46.mount: Deactivated successfully. 
Apr 24 00:38:51.180215 containerd[1570]: time="2026-04-24T00:38:51.177455411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-z2r4v,Uid:742c480a-2ffe-4ce5-a8d2-bc4cb7575d82,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b078fa76752059c4bdec966f81084b4975a1bb0cbc145f41d6ac86ad0fb52b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:51.180673 kubelet[2842]: E0424 00:38:51.180277 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b078fa76752059c4bdec966f81084b4975a1bb0cbc145f41d6ac86ad0fb52b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:51.181833 kubelet[2842]: E0424 00:38:51.180947 2842 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b078fa76752059c4bdec966f81084b4975a1bb0cbc145f41d6ac86ad0fb52b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-z2r4v" Apr 24 00:38:51.181833 kubelet[2842]: E0424 00:38:51.181022 2842 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b078fa76752059c4bdec966f81084b4975a1bb0cbc145f41d6ac86ad0fb52b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7d764666f9-z2r4v" Apr 24 00:38:51.181833 kubelet[2842]: E0424 00:38:51.181376 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-z2r4v_kube-system(742c480a-2ffe-4ce5-a8d2-bc4cb7575d82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-z2r4v_kube-system(742c480a-2ffe-4ce5-a8d2-bc4cb7575d82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b078fa76752059c4bdec966f81084b4975a1bb0cbc145f41d6ac86ad0fb52b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-z2r4v" podUID="742c480a-2ffe-4ce5-a8d2-bc4cb7575d82" Apr 24 00:38:51.593709 containerd[1570]: time="2026-04-24T00:38:51.593564859Z" level=error msg="Failed to destroy network for sandbox \"f5ad1e2936d941fb755b32bc7642a044021e4953447dcbf76a39794d907ffc62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:51.651635 systemd[1]: run-netns-cni\x2d769e2149\x2d2580\x2db64f\x2df825\x2d0cf8624721ca.mount: Deactivated successfully. 
Apr 24 00:38:51.687978 containerd[1570]: time="2026-04-24T00:38:51.687062745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66975fdd9d-5jt89,Uid:40de480d-0268-4354-82c2-131a37e9e48c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ad1e2936d941fb755b32bc7642a044021e4953447dcbf76a39794d907ffc62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:51.719259 kubelet[2842]: E0424 00:38:51.718560 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ad1e2936d941fb755b32bc7642a044021e4953447dcbf76a39794d907ffc62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:51.773065 kubelet[2842]: E0424 00:38:51.768375 2842 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ad1e2936d941fb755b32bc7642a044021e4953447dcbf76a39794d907ffc62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-66975fdd9d-5jt89" Apr 24 00:38:51.773065 kubelet[2842]: E0424 00:38:51.771238 2842 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ad1e2936d941fb755b32bc7642a044021e4953447dcbf76a39794d907ffc62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-66975fdd9d-5jt89" Apr 24 00:38:51.778613 kubelet[2842]: E0424 00:38:51.775591 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66975fdd9d-5jt89_calico-system(40de480d-0268-4354-82c2-131a37e9e48c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66975fdd9d-5jt89_calico-system(40de480d-0268-4354-82c2-131a37e9e48c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5ad1e2936d941fb755b32bc7642a044021e4953447dcbf76a39794d907ffc62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-66975fdd9d-5jt89" podUID="40de480d-0268-4354-82c2-131a37e9e48c" Apr 24 00:38:52.756675 containerd[1570]: time="2026-04-24T00:38:52.756250678Z" level=error msg="Failed to destroy network for sandbox \"1adda2d6827244289842f3e45e020e00ee5cecec88a2ebc3de5d106f11900faf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:52.800124 systemd[1]: run-netns-cni\x2ded41dfc6\x2d5dbd\x2d335b\x2dad37\x2d0f5baf304767.mount: Deactivated successfully. 
Apr 24 00:38:52.828616 containerd[1570]: time="2026-04-24T00:38:52.827629987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-bxjp2,Uid:e0e7964a-87e1-45b9-8f9b-26c7b6d887fe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adda2d6827244289842f3e45e020e00ee5cecec88a2ebc3de5d106f11900faf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:52.862554 containerd[1570]: time="2026-04-24T00:38:52.862345988Z" level=info msg="StartContainer for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" returns successfully" Apr 24 00:38:52.904753 kubelet[2842]: E0424 00:38:52.887752 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adda2d6827244289842f3e45e020e00ee5cecec88a2ebc3de5d106f11900faf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:52.974587 containerd[1570]: time="2026-04-24T00:38:52.974242229Z" level=error msg="Failed to destroy network for sandbox \"c225a1d2e42b6461c5d97e120edd5ee3451b013230aec2feeca72b7d0beee3bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:52.980063 kubelet[2842]: E0424 00:38:52.968557 2842 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adda2d6827244289842f3e45e020e00ee5cecec88a2ebc3de5d106f11900faf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7fb6cdc5d9-bxjp2" Apr 24 00:38:52.988731 containerd[1570]: time="2026-04-24T00:38:52.988178247Z" level=error msg="Failed to destroy network for sandbox \"0e9aa737bccaf324c6e857d355f015f5b6ebe231a972f777e0e8068fd443c48b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:53.000787 containerd[1570]: time="2026-04-24T00:38:53.000516821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57995fc485-nczn8,Uid:8fde9b4d-ddd2-4cff-90fd-c8ea6539628b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c225a1d2e42b6461c5d97e120edd5ee3451b013230aec2feeca72b7d0beee3bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:53.019672 kubelet[2842]: E0424 00:38:52.980666 2842 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adda2d6827244289842f3e45e020e00ee5cecec88a2ebc3de5d106f11900faf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7fb6cdc5d9-bxjp2" Apr 24 00:38:53.020497 containerd[1570]: time="2026-04-24T00:38:53.020072396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-dpk5v,Uid:e790c486-e226-481e-a682-c42b85775d13,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e9aa737bccaf324c6e857d355f015f5b6ebe231a972f777e0e8068fd443c48b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:53.056320 systemd[1]: run-netns-cni\x2d3d813719\x2d8897\x2d447a\x2d02ae\x2df17fb08bda7e.mount: Deactivated successfully. Apr 24 00:38:53.065191 systemd[1]: run-netns-cni\x2d65485e8b\x2d3f89\x2d056c\x2d1194\x2d86a964591d93.mount: Deactivated successfully. Apr 24 00:38:53.086902 kubelet[2842]: E0424 00:38:53.077364 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7fb6cdc5d9-bxjp2_calico-system(e0e7964a-87e1-45b9-8f9b-26c7b6d887fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7fb6cdc5d9-bxjp2_calico-system(e0e7964a-87e1-45b9-8f9b-26c7b6d887fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1adda2d6827244289842f3e45e020e00ee5cecec88a2ebc3de5d106f11900faf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7fb6cdc5d9-bxjp2" podUID="e0e7964a-87e1-45b9-8f9b-26c7b6d887fe" Apr 24 00:38:53.101214 kubelet[2842]: E0424 00:38:53.100718 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c225a1d2e42b6461c5d97e120edd5ee3451b013230aec2feeca72b7d0beee3bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:53.103427 kubelet[2842]: E0424 00:38:53.098718 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e9aa737bccaf324c6e857d355f015f5b6ebe231a972f777e0e8068fd443c48b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:53.103772 kubelet[2842]: E0424 00:38:53.103529 2842 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e9aa737bccaf324c6e857d355f015f5b6ebe231a972f777e0e8068fd443c48b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-dpk5v" Apr 24 00:38:53.105146 kubelet[2842]: E0424 00:38:53.104234 2842 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e9aa737bccaf324c6e857d355f015f5b6ebe231a972f777e0e8068fd443c48b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-dpk5v" Apr 24 00:38:53.109016 kubelet[2842]: E0424 00:38:53.106756 2842 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c225a1d2e42b6461c5d97e120edd5ee3451b013230aec2feeca72b7d0beee3bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57995fc485-nczn8" Apr 24 00:38:53.117761 kubelet[2842]: E0424 00:38:53.112833 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-dpk5v_kube-system(e790c486-e226-481e-a682-c42b85775d13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-dpk5v_kube-system(e790c486-e226-481e-a682-c42b85775d13)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"0e9aa737bccaf324c6e857d355f015f5b6ebe231a972f777e0e8068fd443c48b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-dpk5v" podUID="e790c486-e226-481e-a682-c42b85775d13" Apr 24 00:38:53.176033 kubelet[2842]: E0424 00:38:53.110377 2842 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c225a1d2e42b6461c5d97e120edd5ee3451b013230aec2feeca72b7d0beee3bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57995fc485-nczn8" Apr 24 00:38:53.178020 kubelet[2842]: E0424 00:38:53.176554 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57995fc485-nczn8_calico-system(8fde9b4d-ddd2-4cff-90fd-c8ea6539628b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57995fc485-nczn8_calico-system(8fde9b4d-ddd2-4cff-90fd-c8ea6539628b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c225a1d2e42b6461c5d97e120edd5ee3451b013230aec2feeca72b7d0beee3bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57995fc485-nczn8" podUID="8fde9b4d-ddd2-4cff-90fd-c8ea6539628b" Apr 24 00:38:53.227538 containerd[1570]: time="2026-04-24T00:38:53.225541817Z" level=error msg="Failed to destroy network for sandbox \"f2948319f62cb002857a6ff5dbe52d7400bac64c41420bd0a30dad04ae79963b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:53.281316 containerd[1570]: time="2026-04-24T00:38:53.277628561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66975fdd9d-7fq42,Uid:093ff803-c52d-48d6-b99b-3a99f012e7a9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2948319f62cb002857a6ff5dbe52d7400bac64c41420bd0a30dad04ae79963b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:53.297557 systemd[1]: run-netns-cni\x2d22700d7c\x2d847d\x2d4d7c\x2d162a\x2d41431b938c89.mount: Deactivated successfully. Apr 24 00:38:53.297787 kubelet[2842]: E0424 00:38:53.297578 2842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2948319f62cb002857a6ff5dbe52d7400bac64c41420bd0a30dad04ae79963b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:38:53.317899 kubelet[2842]: E0424 00:38:53.300196 2842 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2948319f62cb002857a6ff5dbe52d7400bac64c41420bd0a30dad04ae79963b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-66975fdd9d-7fq42" Apr 24 00:38:53.318414 kubelet[2842]: E0424 00:38:53.318152 2842 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f2948319f62cb002857a6ff5dbe52d7400bac64c41420bd0a30dad04ae79963b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-66975fdd9d-7fq42" Apr 24 00:38:53.318817 kubelet[2842]: E0424 00:38:53.318519 2842 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66975fdd9d-7fq42_calico-system(093ff803-c52d-48d6-b99b-3a99f012e7a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66975fdd9d-7fq42_calico-system(093ff803-c52d-48d6-b99b-3a99f012e7a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2948319f62cb002857a6ff5dbe52d7400bac64c41420bd0a30dad04ae79963b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-66975fdd9d-7fq42" podUID="093ff803-c52d-48d6-b99b-3a99f012e7a9" Apr 24 00:38:55.356253 kubelet[2842]: I0424 00:38:55.350387 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-249n5" podStartSLOduration=10.643395583 podStartE2EDuration="1m33.350299963s" podCreationTimestamp="2026-04-24 00:37:22 +0000 UTC" firstStartedPulling="2026-04-24 00:37:24.885685263 +0000 UTC m=+36.273155720" lastFinishedPulling="2026-04-24 00:38:47.592589645 +0000 UTC m=+118.980060100" observedRunningTime="2026-04-24 00:38:55.28252311 +0000 UTC m=+126.669993581" watchObservedRunningTime="2026-04-24 00:38:55.350299963 +0000 UTC m=+126.737770434" Apr 24 00:39:01.316891 kubelet[2842]: E0424 00:39:01.316235 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 
00:39:01.387288 kubelet[2842]: I0424 00:39:01.386556 2842 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-nginx-config\" (UniqueName: \"kubernetes.io/configmap/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-nginx-config\") pod \"8fde9b4d-ddd2-4cff-90fd-c8ea6539628b\" (UID: \"8fde9b4d-ddd2-4cff-90fd-c8ea6539628b\") " Apr 24 00:39:01.387815 kubelet[2842]: I0424 00:39:01.387683 2842 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-whisker-ca-bundle\") pod \"8fde9b4d-ddd2-4cff-90fd-c8ea6539628b\" (UID: \"8fde9b4d-ddd2-4cff-90fd-c8ea6539628b\") " Apr 24 00:39:01.387851 kubelet[2842]: I0424 00:39:01.387818 2842 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-whisker-backend-key-pair\") pod \"8fde9b4d-ddd2-4cff-90fd-c8ea6539628b\" (UID: \"8fde9b4d-ddd2-4cff-90fd-c8ea6539628b\") " Apr 24 00:39:01.388440 kubelet[2842]: I0424 00:39:01.387942 2842 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-kube-api-access-66dmt\" (UniqueName: \"kubernetes.io/projected/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-kube-api-access-66dmt\") pod \"8fde9b4d-ddd2-4cff-90fd-c8ea6539628b\" (UID: \"8fde9b4d-ddd2-4cff-90fd-c8ea6539628b\") " Apr 24 00:39:01.393989 kubelet[2842]: I0424 00:39:01.391785 2842 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-nginx-config" pod "8fde9b4d-ddd2-4cff-90fd-c8ea6539628b" (UID: "8fde9b4d-ddd2-4cff-90fd-c8ea6539628b"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 00:39:01.439004 kubelet[2842]: I0424 00:39:01.428770 2842 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-whisker-ca-bundle" pod "8fde9b4d-ddd2-4cff-90fd-c8ea6539628b" (UID: "8fde9b4d-ddd2-4cff-90fd-c8ea6539628b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 00:39:01.460825 systemd[1]: var-lib-kubelet-pods-8fde9b4d\x2dddd2\x2d4cff\x2d90fd\x2dc8ea6539628b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 24 00:39:01.513339 kubelet[2842]: I0424 00:39:01.511638 2842 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 24 00:39:01.513339 kubelet[2842]: I0424 00:39:01.511738 2842 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 24 00:39:01.512407 systemd[1]: var-lib-kubelet-pods-8fde9b4d\x2dddd2\x2d4cff\x2d90fd\x2dc8ea6539628b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d66dmt.mount: Deactivated successfully. Apr 24 00:39:01.519453 kubelet[2842]: I0424 00:39:01.519340 2842 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-whisker-backend-key-pair" pod "8fde9b4d-ddd2-4cff-90fd-c8ea6539628b" (UID: "8fde9b4d-ddd2-4cff-90fd-c8ea6539628b"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 00:39:01.524165 kubelet[2842]: I0424 00:39:01.523179 2842 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-kube-api-access-66dmt" pod "8fde9b4d-ddd2-4cff-90fd-c8ea6539628b" (UID: "8fde9b4d-ddd2-4cff-90fd-c8ea6539628b"). InnerVolumeSpecName "kube-api-access-66dmt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 00:39:01.641260 kubelet[2842]: I0424 00:39:01.640620 2842 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 24 00:39:01.641260 kubelet[2842]: I0424 00:39:01.641252 2842 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-66dmt\" (UniqueName: \"kubernetes.io/projected/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b-kube-api-access-66dmt\") on node \"localhost\" DevicePath \"\"" Apr 24 00:39:02.152647 containerd[1570]: time="2026-04-24T00:39:02.151399363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p4jvm,Uid:4753596c-a0a7-4611-9d92-e3a14065926b,Namespace:calico-system,Attempt:0,}" Apr 24 00:39:02.520726 systemd[1]: Removed slice kubepods-besteffort-pod8fde9b4d_ddd2_4cff_90fd_c8ea6539628b.slice - libcontainer container kubepods-besteffort-pod8fde9b4d_ddd2_4cff_90fd_c8ea6539628b.slice. 
Apr 24 00:39:03.856052 kubelet[2842]: E0424 00:39:03.829797 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:39:03.884925 containerd[1570]: time="2026-04-24T00:39:03.884788266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8bc45cf7-shmzv,Uid:2f2e08c2-9fb6-4155-b7e3-6d9c13e99150,Namespace:calico-system,Attempt:0,}" Apr 24 00:39:04.027399 containerd[1570]: time="2026-04-24T00:39:04.020841437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-z2r4v,Uid:742c480a-2ffe-4ce5-a8d2-bc4cb7575d82,Namespace:kube-system,Attempt:0,}" Apr 24 00:39:04.896089 containerd[1570]: time="2026-04-24T00:39:04.895813994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66975fdd9d-5jt89,Uid:40de480d-0268-4354-82c2-131a37e9e48c,Namespace:calico-system,Attempt:0,}" Apr 24 00:39:05.445239 kubelet[2842]: I0424 00:39:05.444645 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgc8r\" (UniqueName: \"kubernetes.io/projected/3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a-kube-api-access-lgc8r\") pod \"whisker-596bbfbf67-qp7wt\" (UID: \"3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a\") " pod="calico-system/whisker-596bbfbf67-qp7wt" Apr 24 00:39:05.445239 kubelet[2842]: I0424 00:39:05.444753 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a-whisker-backend-key-pair\") pod \"whisker-596bbfbf67-qp7wt\" (UID: \"3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a\") " pod="calico-system/whisker-596bbfbf67-qp7wt" Apr 24 00:39:05.516152 kubelet[2842]: I0424 00:39:05.515567 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" 
(UniqueName: \"kubernetes.io/configmap/3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a-nginx-config\") pod \"whisker-596bbfbf67-qp7wt\" (UID: \"3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a\") " pod="calico-system/whisker-596bbfbf67-qp7wt" Apr 24 00:39:05.534731 kubelet[2842]: I0424 00:39:05.533342 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a-whisker-ca-bundle\") pod \"whisker-596bbfbf67-qp7wt\" (UID: \"3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a\") " pod="calico-system/whisker-596bbfbf67-qp7wt" Apr 24 00:39:05.990208 kubelet[2842]: I0424 00:39:05.988332 2842 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8fde9b4d-ddd2-4cff-90fd-c8ea6539628b" path="/var/lib/kubelet/pods/8fde9b4d-ddd2-4cff-90fd-c8ea6539628b/volumes" Apr 24 00:39:06.108808 systemd[1]: Created slice kubepods-besteffort-pod3f9c4c02_5b14_4573_ac3f_1c1e4e84f33a.slice - libcontainer container kubepods-besteffort-pod3f9c4c02_5b14_4573_ac3f_1c1e4e84f33a.slice. 
Apr 24 00:39:07.647394 containerd[1570]: time="2026-04-24T00:39:07.647329833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596bbfbf67-qp7wt,Uid:3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a,Namespace:calico-system,Attempt:0,}" Apr 24 00:39:08.186992 kubelet[2842]: E0424 00:39:08.185158 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:39:08.209432 containerd[1570]: time="2026-04-24T00:39:08.208775071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-dpk5v,Uid:e790c486-e226-481e-a682-c42b85775d13,Namespace:kube-system,Attempt:0,}" Apr 24 00:39:08.209757 containerd[1570]: time="2026-04-24T00:39:08.209502223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-bxjp2,Uid:e0e7964a-87e1-45b9-8f9b-26c7b6d887fe,Namespace:calico-system,Attempt:0,}" Apr 24 00:39:09.817009 containerd[1570]: time="2026-04-24T00:39:09.816702628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66975fdd9d-7fq42,Uid:093ff803-c52d-48d6-b99b-3a99f012e7a9,Namespace:calico-system,Attempt:0,}" Apr 24 00:39:16.564615 systemd-networkd[1483]: calif7c7083a083: Link UP Apr 24 00:39:16.566325 systemd-networkd[1483]: calif7c7083a083: Gained carrier Apr 24 00:39:16.704811 containerd[1570]: 2026-04-24 00:39:03.205 [ERROR][4093] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 00:39:16.704811 containerd[1570]: 2026-04-24 00:39:05.296 [INFO][4093] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--p4jvm-eth0 csi-node-driver- calico-system 4753596c-a0a7-4611-9d92-e3a14065926b 791 0 2026-04-24 00:37:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:6986d7597d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-p4jvm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif7c7083a083 [] [] }} ContainerID="0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" Namespace="calico-system" Pod="csi-node-driver-p4jvm" WorkloadEndpoint="localhost-k8s-csi--node--driver--p4jvm-" Apr 24 00:39:16.704811 containerd[1570]: 2026-04-24 00:39:05.299 [INFO][4093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" Namespace="calico-system" Pod="csi-node-driver-p4jvm" WorkloadEndpoint="localhost-k8s-csi--node--driver--p4jvm-eth0" Apr 24 00:39:16.704811 containerd[1570]: 2026-04-24 00:39:10.370 [INFO][4141] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" HandleID="k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" Workload="localhost-k8s-csi--node--driver--p4jvm-eth0" Apr 24 00:39:16.711492 containerd[1570]: 2026-04-24 00:39:11.832 [INFO][4141] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" HandleID="k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" Workload="localhost-k8s-csi--node--driver--p4jvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000199240), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-p4jvm", "timestamp":"2026-04-24 00:39:10.370141845 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000154840)} Apr 24 00:39:16.711492 containerd[1570]: 2026-04-24 00:39:11.849 [INFO][4141] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:39:16.711492 containerd[1570]: 2026-04-24 00:39:11.859 [INFO][4141] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 00:39:16.711492 containerd[1570]: 2026-04-24 00:39:11.860 [INFO][4141] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 00:39:16.711492 containerd[1570]: 2026-04-24 00:39:12.184 [INFO][4141] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" host="localhost" Apr 24 00:39:16.711492 containerd[1570]: 2026-04-24 00:39:13.091 [INFO][4141] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 00:39:16.711492 containerd[1570]: 2026-04-24 00:39:13.641 [INFO][4141] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 00:39:16.711492 containerd[1570]: 2026-04-24 00:39:14.064 [INFO][4141] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:16.711492 containerd[1570]: 2026-04-24 00:39:14.168 [INFO][4141] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:16.711492 containerd[1570]: 2026-04-24 00:39:14.663 [INFO][4141] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:16.711492 containerd[1570]: 2026-04-24 00:39:14.734 [INFO][4141] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:16.711785 containerd[1570]: 2026-04-24 00:39:14.734 [INFO][4141] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" 
host="localhost" subnet=192.168.88.128/26 Apr 24 00:39:16.711785 containerd[1570]: 2026-04-24 00:39:14.797 [INFO][4141] ipam/ipam_block_reader_writer.go 231: The block already exists, getting it from data store affinityType="host" host="localhost" subnet=192.168.88.128/26 Apr 24 00:39:16.711785 containerd[1570]: 2026-04-24 00:39:14.858 [INFO][4141] ipam/ipam_block_reader_writer.go 247: Block is already claimed by this host, confirm the affinity affinityType="host" host="localhost" subnet=192.168.88.128/26 Apr 24 00:39:16.711785 containerd[1570]: 2026-04-24 00:39:14.859 [INFO][4141] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="localhost" subnet=192.168.88.128/26 Apr 24 00:39:16.711785 containerd[1570]: 2026-04-24 00:39:15.061 [INFO][4141] ipam/ipam_block_reader_writer.go 292: Affinity is already confirmed host="localhost" subnet=192.168.88.128/26 Apr 24 00:39:16.711785 containerd[1570]: 2026-04-24 00:39:15.065 [INFO][4141] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" host="localhost" Apr 24 00:39:16.711785 containerd[1570]: 2026-04-24 00:39:15.197 [INFO][4141] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816 Apr 24 00:39:16.711785 containerd[1570]: 2026-04-24 00:39:15.289 [INFO][4141] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" host="localhost" Apr 24 00:39:16.711785 containerd[1570]: 2026-04-24 00:39:15.407 [INFO][4141] ipam/ipam.go 1276: Failed to update block block=192.168.88.128/26 error=update conflict: IPAMBlock(192-168-88-128-26) handle="k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" host="localhost" Apr 24 00:39:16.711785 containerd[1570]: 2026-04-24 00:39:15.860 [INFO][4141] 
ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" host="localhost" Apr 24 00:39:16.712806 containerd[1570]: 2026-04-24 00:39:15.959 [INFO][4141] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816 Apr 24 00:39:16.712806 containerd[1570]: 2026-04-24 00:39:16.076 [INFO][4141] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" host="localhost" Apr 24 00:39:16.712806 containerd[1570]: 2026-04-24 00:39:16.366 [INFO][4141] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" host="localhost" Apr 24 00:39:16.712806 containerd[1570]: 2026-04-24 00:39:16.396 [INFO][4141] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" host="localhost" Apr 24 00:39:16.712806 containerd[1570]: 2026-04-24 00:39:16.402 [INFO][4141] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 00:39:16.712806 containerd[1570]: 2026-04-24 00:39:16.406 [INFO][4141] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" HandleID="k8s-pod-network.0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" Workload="localhost-k8s-csi--node--driver--p4jvm-eth0" Apr 24 00:39:16.712965 containerd[1570]: 2026-04-24 00:39:16.471 [INFO][4093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" Namespace="calico-system" Pod="csi-node-driver-p4jvm" WorkloadEndpoint="localhost-k8s-csi--node--driver--p4jvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p4jvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4753596c-a0a7-4611-9d92-e3a14065926b", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 37, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6986d7597d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-p4jvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7c7083a083", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:16.713227 containerd[1570]: 2026-04-24 00:39:16.482 [INFO][4093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" Namespace="calico-system" Pod="csi-node-driver-p4jvm" WorkloadEndpoint="localhost-k8s-csi--node--driver--p4jvm-eth0" Apr 24 00:39:16.713227 containerd[1570]: 2026-04-24 00:39:16.483 [INFO][4093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7c7083a083 ContainerID="0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" Namespace="calico-system" Pod="csi-node-driver-p4jvm" WorkloadEndpoint="localhost-k8s-csi--node--driver--p4jvm-eth0" Apr 24 00:39:16.713227 containerd[1570]: 2026-04-24 00:39:16.565 [INFO][4093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" Namespace="calico-system" Pod="csi-node-driver-p4jvm" WorkloadEndpoint="localhost-k8s-csi--node--driver--p4jvm-eth0" Apr 24 00:39:16.714886 containerd[1570]: 2026-04-24 00:39:16.568 [INFO][4093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" Namespace="calico-system" Pod="csi-node-driver-p4jvm" WorkloadEndpoint="localhost-k8s-csi--node--driver--p4jvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p4jvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4753596c-a0a7-4611-9d92-e3a14065926b", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 37, 23, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6986d7597d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816", Pod:"csi-node-driver-p4jvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7c7083a083", MAC:"fa:93:1a:ab:be:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:16.715061 containerd[1570]: 2026-04-24 00:39:16.686 [INFO][4093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" Namespace="calico-system" Pod="csi-node-driver-p4jvm" WorkloadEndpoint="localhost-k8s-csi--node--driver--p4jvm-eth0" Apr 24 00:39:17.469050 containerd[1570]: time="2026-04-24T00:39:17.410684824Z" level=info msg="connecting to shim 0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816" address="unix:///run/containerd/s/678d6184270d1e8aab3dab3be45df04d7986d06899242228fb46f2039e9dbdc8" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:39:18.168451 systemd[1]: Started cri-containerd-0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816.scope - libcontainer container 
0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816. Apr 24 00:39:18.460250 systemd-networkd[1483]: calif7c7083a083: Gained IPv6LL Apr 24 00:39:18.901290 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 00:39:19.501292 systemd-networkd[1483]: cali6f6c2af40cf: Link UP Apr 24 00:39:19.544963 containerd[1570]: time="2026-04-24T00:39:19.543846033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p4jvm,Uid:4753596c-a0a7-4611-9d92-e3a14065926b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816\"" Apr 24 00:39:19.545029 systemd-networkd[1483]: cali6f6c2af40cf: Gained carrier Apr 24 00:39:19.586595 containerd[1570]: time="2026-04-24T00:39:19.583536252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\"" Apr 24 00:39:19.820962 containerd[1570]: 2026-04-24 00:39:09.998 [ERROR][4152] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 00:39:19.820962 containerd[1570]: 2026-04-24 00:39:10.387 [INFO][4152] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--596bbfbf67--qp7wt-eth0 whisker-596bbfbf67- calico-system 3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a 1148 0 2026-04-24 00:39:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:596bbfbf67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-596bbfbf67-qp7wt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6f6c2af40cf [] [] }} ContainerID="ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" Namespace="calico-system" Pod="whisker-596bbfbf67-qp7wt" 
WorkloadEndpoint="localhost-k8s-whisker--596bbfbf67--qp7wt-" Apr 24 00:39:19.820962 containerd[1570]: 2026-04-24 00:39:10.387 [INFO][4152] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" Namespace="calico-system" Pod="whisker-596bbfbf67-qp7wt" WorkloadEndpoint="localhost-k8s-whisker--596bbfbf67--qp7wt-eth0" Apr 24 00:39:19.820962 containerd[1570]: 2026-04-24 00:39:14.421 [INFO][4225] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" HandleID="k8s-pod-network.ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" Workload="localhost-k8s-whisker--596bbfbf67--qp7wt-eth0" Apr 24 00:39:19.823827 containerd[1570]: 2026-04-24 00:39:14.733 [INFO][4225] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" HandleID="k8s-pod-network.ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" Workload="localhost-k8s-whisker--596bbfbf67--qp7wt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ee400), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-596bbfbf67-qp7wt", "timestamp":"2026-04-24 00:39:14.421476463 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000214580)} Apr 24 00:39:19.823827 containerd[1570]: 2026-04-24 00:39:14.734 [INFO][4225] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:39:19.823827 containerd[1570]: 2026-04-24 00:39:16.404 [INFO][4225] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 00:39:19.823827 containerd[1570]: 2026-04-24 00:39:16.413 [INFO][4225] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 00:39:19.823827 containerd[1570]: 2026-04-24 00:39:16.596 [INFO][4225] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" host="localhost" Apr 24 00:39:19.823827 containerd[1570]: 2026-04-24 00:39:16.796 [INFO][4225] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 00:39:19.823827 containerd[1570]: 2026-04-24 00:39:17.350 [INFO][4225] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 00:39:19.823827 containerd[1570]: 2026-04-24 00:39:17.560 [INFO][4225] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:19.823827 containerd[1570]: 2026-04-24 00:39:17.963 [INFO][4225] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:19.823827 containerd[1570]: 2026-04-24 00:39:17.973 [INFO][4225] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" host="localhost" Apr 24 00:39:19.857463 containerd[1570]: 2026-04-24 00:39:18.151 [INFO][4225] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2 Apr 24 00:39:19.857463 containerd[1570]: 2026-04-24 00:39:18.594 [INFO][4225] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" host="localhost" Apr 24 00:39:19.857463 containerd[1570]: 2026-04-24 00:39:18.916 [INFO][4225] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" host="localhost" Apr 24 00:39:19.857463 containerd[1570]: 2026-04-24 00:39:18.950 [INFO][4225] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" host="localhost" Apr 24 00:39:19.857463 containerd[1570]: 2026-04-24 00:39:18.953 [INFO][4225] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:39:19.857463 containerd[1570]: 2026-04-24 00:39:18.966 [INFO][4225] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" HandleID="k8s-pod-network.ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" Workload="localhost-k8s-whisker--596bbfbf67--qp7wt-eth0" Apr 24 00:39:19.857665 containerd[1570]: 2026-04-24 00:39:19.411 [INFO][4152] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" Namespace="calico-system" Pod="whisker-596bbfbf67-qp7wt" WorkloadEndpoint="localhost-k8s-whisker--596bbfbf67--qp7wt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--596bbfbf67--qp7wt-eth0", GenerateName:"whisker-596bbfbf67-", Namespace:"calico-system", SelfLink:"", UID:"3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 39, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"596bbfbf67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-596bbfbf67-qp7wt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6f6c2af40cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:19.857665 containerd[1570]: 2026-04-24 00:39:19.426 [INFO][4152] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" Namespace="calico-system" Pod="whisker-596bbfbf67-qp7wt" WorkloadEndpoint="localhost-k8s-whisker--596bbfbf67--qp7wt-eth0" Apr 24 00:39:19.865584 containerd[1570]: 2026-04-24 00:39:19.469 [INFO][4152] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f6c2af40cf ContainerID="ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" Namespace="calico-system" Pod="whisker-596bbfbf67-qp7wt" WorkloadEndpoint="localhost-k8s-whisker--596bbfbf67--qp7wt-eth0" Apr 24 00:39:19.865584 containerd[1570]: 2026-04-24 00:39:19.551 [INFO][4152] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" Namespace="calico-system" Pod="whisker-596bbfbf67-qp7wt" WorkloadEndpoint="localhost-k8s-whisker--596bbfbf67--qp7wt-eth0" Apr 24 00:39:19.865694 containerd[1570]: 2026-04-24 00:39:19.561 [INFO][4152] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" Namespace="calico-system" Pod="whisker-596bbfbf67-qp7wt" 
WorkloadEndpoint="localhost-k8s-whisker--596bbfbf67--qp7wt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--596bbfbf67--qp7wt-eth0", GenerateName:"whisker-596bbfbf67-", Namespace:"calico-system", SelfLink:"", UID:"3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 39, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"596bbfbf67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2", Pod:"whisker-596bbfbf67-qp7wt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6f6c2af40cf", MAC:"fe:3c:9f:d6:fd:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:19.870487 containerd[1570]: 2026-04-24 00:39:19.814 [INFO][4152] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" Namespace="calico-system" Pod="whisker-596bbfbf67-qp7wt" WorkloadEndpoint="localhost-k8s-whisker--596bbfbf67--qp7wt-eth0" Apr 24 00:39:20.067795 containerd[1570]: time="2026-04-24T00:39:20.067663017Z" level=info msg="connecting to shim 
ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2" address="unix:///run/containerd/s/95f41cbc422e0865a84a4f6b033a0be94655562d7e52912d8de954c0a85239bd" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:39:20.359286 systemd[1]: Started cri-containerd-ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2.scope - libcontainer container ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2. Apr 24 00:39:20.553703 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 00:39:20.606338 systemd-networkd[1483]: cali6f6c2af40cf: Gained IPv6LL Apr 24 00:39:20.884443 containerd[1570]: time="2026-04-24T00:39:20.884096165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596bbfbf67-qp7wt,Uid:3f9c4c02-5b14-4573-ac3f-1c1e4e84f33a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2\"" Apr 24 00:39:21.234565 systemd-networkd[1483]: califbd8b9e02a4: Link UP Apr 24 00:39:21.243921 systemd-networkd[1483]: califbd8b9e02a4: Gained carrier Apr 24 00:39:21.808662 containerd[1570]: 2026-04-24 00:39:09.971 [ERROR][4128] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 00:39:21.808662 containerd[1570]: 2026-04-24 00:39:10.759 [INFO][4128] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0 calico-apiserver-66975fdd9d- calico-system 40de480d-0268-4354-82c2-131a37e9e48c 1080 0 2026-04-24 00:37:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66975fdd9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] 
[] []} {k8s localhost calico-apiserver-66975fdd9d-5jt89 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] califbd8b9e02a4 [] [] }} ContainerID="03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-5jt89" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--5jt89-" Apr 24 00:39:21.808662 containerd[1570]: 2026-04-24 00:39:10.770 [INFO][4128] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-5jt89" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0" Apr 24 00:39:21.808662 containerd[1570]: 2026-04-24 00:39:14.772 [INFO][4237] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" HandleID="k8s-pod-network.03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" Workload="localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0" Apr 24 00:39:21.822359 containerd[1570]: 2026-04-24 00:39:14.800 [INFO][4237] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" HandleID="k8s-pod-network.03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" Workload="localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006564a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-66975fdd9d-5jt89", "timestamp":"2026-04-24 00:39:14.772902059 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001f34a0)} Apr 
24 00:39:21.822359 containerd[1570]: 2026-04-24 00:39:14.801 [INFO][4237] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:39:21.822359 containerd[1570]: 2026-04-24 00:39:18.952 [INFO][4237] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 00:39:21.822359 containerd[1570]: 2026-04-24 00:39:18.976 [INFO][4237] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 00:39:21.822359 containerd[1570]: 2026-04-24 00:39:19.498 [INFO][4237] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" host="localhost" Apr 24 00:39:21.822359 containerd[1570]: 2026-04-24 00:39:19.794 [INFO][4237] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 00:39:21.822359 containerd[1570]: 2026-04-24 00:39:20.009 [INFO][4237] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 00:39:21.822359 containerd[1570]: 2026-04-24 00:39:20.183 [INFO][4237] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:21.822359 containerd[1570]: 2026-04-24 00:39:20.369 [INFO][4237] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:21.822359 containerd[1570]: 2026-04-24 00:39:20.370 [INFO][4237] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" host="localhost" Apr 24 00:39:21.825476 containerd[1570]: 2026-04-24 00:39:20.499 [INFO][4237] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653 Apr 24 00:39:21.825476 containerd[1570]: 2026-04-24 00:39:20.576 [INFO][4237] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" host="localhost" Apr 24 00:39:21.825476 containerd[1570]: 2026-04-24 00:39:20.783 [INFO][4237] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" host="localhost" Apr 24 00:39:21.825476 containerd[1570]: 2026-04-24 00:39:20.783 [INFO][4237] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" host="localhost" Apr 24 00:39:21.825476 containerd[1570]: 2026-04-24 00:39:20.784 [INFO][4237] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:39:21.825476 containerd[1570]: 2026-04-24 00:39:20.784 [INFO][4237] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" HandleID="k8s-pod-network.03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" Workload="localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0" Apr 24 00:39:21.826244 containerd[1570]: 2026-04-24 00:39:20.866 [INFO][4128] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-5jt89" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0", GenerateName:"calico-apiserver-66975fdd9d-", Namespace:"calico-system", SelfLink:"", UID:"40de480d-0268-4354-82c2-131a37e9e48c", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 37, 19, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66975fdd9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66975fdd9d-5jt89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califbd8b9e02a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:21.829538 containerd[1570]: 2026-04-24 00:39:20.872 [INFO][4128] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-5jt89" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0" Apr 24 00:39:21.829538 containerd[1570]: 2026-04-24 00:39:20.874 [INFO][4128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbd8b9e02a4 ContainerID="03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-5jt89" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0" Apr 24 00:39:21.829538 containerd[1570]: 2026-04-24 00:39:21.250 [INFO][4128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" 
Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-5jt89" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0" Apr 24 00:39:21.829672 containerd[1570]: 2026-04-24 00:39:21.269 [INFO][4128] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-5jt89" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0", GenerateName:"calico-apiserver-66975fdd9d-", Namespace:"calico-system", SelfLink:"", UID:"40de480d-0268-4354-82c2-131a37e9e48c", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66975fdd9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653", Pod:"calico-apiserver-66975fdd9d-5jt89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califbd8b9e02a4", MAC:"72:12:d1:07:22:46", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:21.882103 containerd[1570]: 2026-04-24 00:39:21.741 [INFO][4128] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-5jt89" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--5jt89-eth0" Apr 24 00:39:22.189304 containerd[1570]: time="2026-04-24T00:39:22.183730958Z" level=info msg="connecting to shim 03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653" address="unix:///run/containerd/s/f2dd0e7d091a6a06381be00e2c3171a755123fca63a5d7ebc83068e3db98af21" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:39:22.442152 systemd[1]: Started cri-containerd-03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653.scope - libcontainer container 03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653. 
Apr 24 00:39:22.847048 systemd-networkd[1483]: califbd8b9e02a4: Gained IPv6LL Apr 24 00:39:22.864028 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 00:39:23.139912 systemd-networkd[1483]: cali2fe43aa54bf: Link UP Apr 24 00:39:23.142177 systemd-networkd[1483]: cali2fe43aa54bf: Gained carrier Apr 24 00:39:23.336976 containerd[1570]: time="2026-04-24T00:39:23.334654395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66975fdd9d-5jt89,Uid:40de480d-0268-4354-82c2-131a37e9e48c,Namespace:calico-system,Attempt:0,} returns sandbox id \"03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653\"" Apr 24 00:39:23.377919 containerd[1570]: 2026-04-24 00:39:08.861 [ERROR][4116] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 00:39:23.377919 containerd[1570]: 2026-04-24 00:39:10.305 [INFO][4116] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--z2r4v-eth0 coredns-7d764666f9- kube-system 742c480a-2ffe-4ce5-a8d2-bc4cb7575d82 1061 0 2026-04-24 00:36:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-z2r4v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2fe43aa54bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" Namespace="kube-system" Pod="coredns-7d764666f9-z2r4v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z2r4v-" Apr 24 00:39:23.377919 containerd[1570]: 2026-04-24 00:39:10.374 
[INFO][4116] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" Namespace="kube-system" Pod="coredns-7d764666f9-z2r4v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z2r4v-eth0" Apr 24 00:39:23.377919 containerd[1570]: 2026-04-24 00:39:14.647 [INFO][4224] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" HandleID="k8s-pod-network.7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" Workload="localhost-k8s-coredns--7d764666f9--z2r4v-eth0" Apr 24 00:39:23.381522 containerd[1570]: 2026-04-24 00:39:14.799 [INFO][4224] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" HandleID="k8s-pod-network.7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" Workload="localhost-k8s-coredns--7d764666f9--z2r4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-z2r4v", "timestamp":"2026-04-24 00:39:14.647366451 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000192000)} Apr 24 00:39:23.381522 containerd[1570]: 2026-04-24 00:39:14.826 [INFO][4224] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:39:23.381522 containerd[1570]: 2026-04-24 00:39:20.786 [INFO][4224] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 00:39:23.381522 containerd[1570]: 2026-04-24 00:39:20.798 [INFO][4224] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 00:39:23.381522 containerd[1570]: 2026-04-24 00:39:20.885 [INFO][4224] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" host="localhost" Apr 24 00:39:23.381522 containerd[1570]: 2026-04-24 00:39:21.275 [INFO][4224] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 00:39:23.381522 containerd[1570]: 2026-04-24 00:39:21.820 [INFO][4224] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 00:39:23.381522 containerd[1570]: 2026-04-24 00:39:22.060 [INFO][4224] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:23.381522 containerd[1570]: 2026-04-24 00:39:22.346 [INFO][4224] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:23.381522 containerd[1570]: 2026-04-24 00:39:22.361 [INFO][4224] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" host="localhost" Apr 24 00:39:23.384733 containerd[1570]: 2026-04-24 00:39:22.590 [INFO][4224] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6 Apr 24 00:39:23.384733 containerd[1570]: 2026-04-24 00:39:22.839 [INFO][4224] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" host="localhost" Apr 24 00:39:23.384733 containerd[1570]: 2026-04-24 00:39:23.098 [INFO][4224] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" host="localhost" Apr 24 00:39:23.384733 containerd[1570]: 2026-04-24 00:39:23.101 [INFO][4224] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" host="localhost" Apr 24 00:39:23.384733 containerd[1570]: 2026-04-24 00:39:23.103 [INFO][4224] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:39:23.384733 containerd[1570]: 2026-04-24 00:39:23.104 [INFO][4224] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" HandleID="k8s-pod-network.7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" Workload="localhost-k8s-coredns--7d764666f9--z2r4v-eth0" Apr 24 00:39:23.385049 containerd[1570]: 2026-04-24 00:39:23.122 [INFO][4116] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" Namespace="kube-system" Pod="coredns-7d764666f9-z2r4v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z2r4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--z2r4v-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"742c480a-2ffe-4ce5-a8d2-bc4cb7575d82", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 36, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-z2r4v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fe43aa54bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:23.385049 containerd[1570]: 2026-04-24 00:39:23.127 [INFO][4116] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" Namespace="kube-system" Pod="coredns-7d764666f9-z2r4v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z2r4v-eth0" Apr 24 00:39:23.385049 containerd[1570]: 2026-04-24 00:39:23.127 [INFO][4116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fe43aa54bf ContainerID="7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" Namespace="kube-system" Pod="coredns-7d764666f9-z2r4v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z2r4v-eth0" Apr 24 
00:39:23.385049 containerd[1570]: 2026-04-24 00:39:23.142 [INFO][4116] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" Namespace="kube-system" Pod="coredns-7d764666f9-z2r4v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z2r4v-eth0" Apr 24 00:39:23.385049 containerd[1570]: 2026-04-24 00:39:23.143 [INFO][4116] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" Namespace="kube-system" Pod="coredns-7d764666f9-z2r4v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z2r4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--z2r4v-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"742c480a-2ffe-4ce5-a8d2-bc4cb7575d82", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 36, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6", Pod:"coredns-7d764666f9-z2r4v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fe43aa54bf", 
MAC:"f2:17:25:c0:8f:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:23.385049 containerd[1570]: 2026-04-24 00:39:23.340 [INFO][4116] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" Namespace="kube-system" Pod="coredns-7d764666f9-z2r4v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--z2r4v-eth0" Apr 24 00:39:24.056398 containerd[1570]: time="2026-04-24T00:39:24.055070163Z" level=info msg="connecting to shim 7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6" address="unix:///run/containerd/s/2dfffe1d9cd04b74d31284383e1d3c982d2b1db9d73070ad84a4bbd87f2bcd5f" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:39:24.184736 kubelet[2842]: E0424 00:39:24.184128 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:39:24.318439 systemd-networkd[1483]: cali2fe43aa54bf: Gained IPv6LL Apr 24 00:39:24.971404 systemd[1]: Started cri-containerd-7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6.scope - libcontainer container 
7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6. Apr 24 00:39:25.288529 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 00:39:26.022800 containerd[1570]: time="2026-04-24T00:39:26.022264132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-z2r4v,Uid:742c480a-2ffe-4ce5-a8d2-bc4cb7575d82,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6\"" Apr 24 00:39:26.055837 kubelet[2842]: E0424 00:39:26.055276 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:39:26.361034 containerd[1570]: time="2026-04-24T00:39:26.358097879Z" level=info msg="CreateContainer within sandbox \"7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 00:39:26.621518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3158139870.mount: Deactivated successfully. Apr 24 00:39:26.699795 containerd[1570]: time="2026-04-24T00:39:26.696195878Z" level=info msg="Container 64127005e1a800842e606bf98326dc86a71aefa6574f1ae679d41a5938ca3616: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:39:26.738705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602363719.mount: Deactivated successfully. 
Apr 24 00:39:26.836947 containerd[1570]: time="2026-04-24T00:39:26.836700619Z" level=info msg="CreateContainer within sandbox \"7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64127005e1a800842e606bf98326dc86a71aefa6574f1ae679d41a5938ca3616\"" Apr 24 00:39:26.843057 containerd[1570]: time="2026-04-24T00:39:26.842981423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:39:26.854587 containerd[1570]: time="2026-04-24T00:39:26.853917580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.5: active requests=0, bytes read=8535421" Apr 24 00:39:26.915274 containerd[1570]: time="2026-04-24T00:39:26.907806500Z" level=info msg="ImageCreate event name:\"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:39:26.961050 containerd[1570]: time="2026-04-24T00:39:26.960897260Z" level=info msg="StartContainer for \"64127005e1a800842e606bf98326dc86a71aefa6574f1ae679d41a5938ca3616\"" Apr 24 00:39:26.976129 containerd[1570]: time="2026-04-24T00:39:26.975983752Z" level=info msg="connecting to shim 64127005e1a800842e606bf98326dc86a71aefa6574f1ae679d41a5938ca3616" address="unix:///run/containerd/s/2dfffe1d9cd04b74d31284383e1d3c982d2b1db9d73070ad84a4bbd87f2bcd5f" protocol=ttrpc version=3 Apr 24 00:39:27.010664 containerd[1570]: time="2026-04-24T00:39:27.010504619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:39:27.115915 containerd[1570]: time="2026-04-24T00:39:27.110802834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.5\" with image id 
\"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\", size \"11496846\" in 7.491221068s" Apr 24 00:39:27.178185 containerd[1570]: time="2026-04-24T00:39:27.124747318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\" returns image reference \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\"" Apr 24 00:39:27.362083 systemd-networkd[1483]: calif0035634dbb: Link UP Apr 24 00:39:27.363991 systemd-networkd[1483]: calif0035634dbb: Gained carrier Apr 24 00:39:27.497344 containerd[1570]: time="2026-04-24T00:39:27.496681946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\"" Apr 24 00:39:27.881757 containerd[1570]: time="2026-04-24T00:39:27.881638041Z" level=info msg="CreateContainer within sandbox \"0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:12.190 [ERROR][4201] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:13.760 [INFO][4201] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0 calico-apiserver-66975fdd9d- calico-system 093ff803-c52d-48d6-b99b-3a99f012e7a9 1076 0 2026-04-24 00:37:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66975fdd9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-66975fdd9d-7fq42 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif0035634dbb [] [] }} ContainerID="29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-7fq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--7fq42-" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:13.828 [INFO][4201] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-7fq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:14.771 [INFO][4263] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" HandleID="k8s-pod-network.29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" Workload="localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:14.906 [INFO][4263] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" HandleID="k8s-pod-network.29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" Workload="localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000118b40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-66975fdd9d-7fq42", "timestamp":"2026-04-24 00:39:14.771202407 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000172160)} Apr 24 00:39:28.465257 
containerd[1570]: 2026-04-24 00:39:14.912 [INFO][4263] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:23.109 [INFO][4263] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:23.114 [INFO][4263] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:23.341 [INFO][4263] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" host="localhost" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:23.969 [INFO][4263] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:24.680 [INFO][4263] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:25.278 [INFO][4263] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:25.820 [INFO][4263] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:25.821 [INFO][4263] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" host="localhost" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:26.030 [INFO][4263] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144 Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:26.607 [INFO][4263] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" host="localhost" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:26.960 [INFO][4263] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" host="localhost" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:26.962 [INFO][4263] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" host="localhost" Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:26.963 [INFO][4263] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:39:28.465257 containerd[1570]: 2026-04-24 00:39:26.963 [INFO][4263] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" HandleID="k8s-pod-network.29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" Workload="localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0" Apr 24 00:39:28.563709 containerd[1570]: 2026-04-24 00:39:27.167 [INFO][4201] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-7fq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0", GenerateName:"calico-apiserver-66975fdd9d-", Namespace:"calico-system", SelfLink:"", UID:"093ff803-c52d-48d6-b99b-3a99f012e7a9", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 37, 19, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66975fdd9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66975fdd9d-7fq42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif0035634dbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:28.563709 containerd[1570]: 2026-04-24 00:39:27.327 [INFO][4201] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-7fq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0" Apr 24 00:39:28.563709 containerd[1570]: 2026-04-24 00:39:27.327 [INFO][4201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0035634dbb ContainerID="29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-7fq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0" Apr 24 00:39:28.563709 containerd[1570]: 2026-04-24 00:39:27.364 [INFO][4201] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" 
Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-7fq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0" Apr 24 00:39:28.563709 containerd[1570]: 2026-04-24 00:39:27.471 [INFO][4201] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-7fq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0", GenerateName:"calico-apiserver-66975fdd9d-", Namespace:"calico-system", SelfLink:"", UID:"093ff803-c52d-48d6-b99b-3a99f012e7a9", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66975fdd9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144", Pod:"calico-apiserver-66975fdd9d-7fq42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif0035634dbb", MAC:"f2:99:b3:f9:ec:9f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:28.563709 containerd[1570]: 2026-04-24 00:39:28.414 [INFO][4201] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" Namespace="calico-system" Pod="calico-apiserver-66975fdd9d-7fq42" WorkloadEndpoint="localhost-k8s-calico--apiserver--66975fdd9d--7fq42-eth0" Apr 24 00:39:28.614124 systemd[1]: Started cri-containerd-64127005e1a800842e606bf98326dc86a71aefa6574f1ae679d41a5938ca3616.scope - libcontainer container 64127005e1a800842e606bf98326dc86a71aefa6574f1ae679d41a5938ca3616. Apr 24 00:39:28.886080 containerd[1570]: time="2026-04-24T00:39:28.884810674Z" level=info msg="Container f5b1aaeaf96515053f35ff83f402a18feea7a9d4c5f4765703983b5891fdba61: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:39:29.108173 containerd[1570]: time="2026-04-24T00:39:29.107805945Z" level=info msg="CreateContainer within sandbox \"0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f5b1aaeaf96515053f35ff83f402a18feea7a9d4c5f4765703983b5891fdba61\"" Apr 24 00:39:29.121630 systemd-networkd[1483]: calif0035634dbb: Gained IPv6LL Apr 24 00:39:29.152281 containerd[1570]: time="2026-04-24T00:39:29.150588873Z" level=info msg="StartContainer for \"f5b1aaeaf96515053f35ff83f402a18feea7a9d4c5f4765703983b5891fdba61\"" Apr 24 00:39:29.517003 containerd[1570]: time="2026-04-24T00:39:29.509683547Z" level=info msg="connecting to shim f5b1aaeaf96515053f35ff83f402a18feea7a9d4c5f4765703983b5891fdba61" address="unix:///run/containerd/s/678d6184270d1e8aab3dab3be45df04d7986d06899242228fb46f2039e9dbdc8" protocol=ttrpc version=3 Apr 24 00:39:30.093795 systemd[1]: Started cri-containerd-f5b1aaeaf96515053f35ff83f402a18feea7a9d4c5f4765703983b5891fdba61.scope - libcontainer container 
f5b1aaeaf96515053f35ff83f402a18feea7a9d4c5f4765703983b5891fdba61. Apr 24 00:39:30.397524 kubelet[2842]: E0424 00:39:30.390694 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:39:30.425972 containerd[1570]: time="2026-04-24T00:39:30.425341289Z" level=info msg="connecting to shim 29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144" address="unix:///run/containerd/s/8ee769a26b8763b67497a10690c10515817ed4fe9f1b7de4fab45f677a738ce9" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:39:32.167190 containerd[1570]: time="2026-04-24T00:39:32.166401150Z" level=info msg="StartContainer for \"64127005e1a800842e606bf98326dc86a71aefa6574f1ae679d41a5938ca3616\" returns successfully" Apr 24 00:39:32.546478 systemd[1]: Started cri-containerd-29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144.scope - libcontainer container 29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144. 
Apr 24 00:39:32.833186 kubelet[2842]: E0424 00:39:32.831605 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:39:32.961817 containerd[1570]: time="2026-04-24T00:39:32.960258790Z" level=error msg="get state for f5b1aaeaf96515053f35ff83f402a18feea7a9d4c5f4765703983b5891fdba61" error="context deadline exceeded" Apr 24 00:39:32.961817 containerd[1570]: time="2026-04-24T00:39:32.960460540Z" level=warning msg="unknown status" status=0 Apr 24 00:39:33.494019 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 00:39:33.678722 kubelet[2842]: I0424 00:39:33.677749 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-z2r4v" podStartSLOduration=160.677069517 podStartE2EDuration="2m40.677069517s" podCreationTimestamp="2026-04-24 00:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:39:33.609297645 +0000 UTC m=+164.996768113" watchObservedRunningTime="2026-04-24 00:39:33.677069517 +0000 UTC m=+165.064539972" Apr 24 00:39:34.178799 kubelet[2842]: E0424 00:39:34.178626 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:39:35.225975 containerd[1570]: time="2026-04-24T00:39:35.223451499Z" level=error msg="get state for f5b1aaeaf96515053f35ff83f402a18feea7a9d4c5f4765703983b5891fdba61" error="context deadline exceeded" Apr 24 00:39:35.225975 containerd[1570]: time="2026-04-24T00:39:35.223503784Z" level=warning msg="unknown status" status=0 Apr 24 00:39:35.613520 kubelet[2842]: E0424 00:39:35.613186 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:39:36.487933 containerd[1570]: time="2026-04-24T00:39:36.487674872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66975fdd9d-7fq42,Uid:093ff803-c52d-48d6-b99b-3a99f012e7a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144\"" Apr 24 00:39:37.591466 containerd[1570]: time="2026-04-24T00:39:37.589812642Z" level=error msg="get state for f5b1aaeaf96515053f35ff83f402a18feea7a9d4c5f4765703983b5891fdba61" error="context deadline exceeded" Apr 24 00:39:37.591466 containerd[1570]: time="2026-04-24T00:39:37.590491548Z" level=warning msg="unknown status" status=0 Apr 24 00:39:37.998477 systemd-networkd[1483]: cali80cb7d1da65: Link UP Apr 24 00:39:38.004847 systemd-networkd[1483]: cali80cb7d1da65: Gained carrier Apr 24 00:39:38.617386 containerd[1570]: time="2026-04-24T00:39:38.616883596Z" level=error msg="ExecSync for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 24 00:39:38.622207 kubelet[2842]: E0424 00:39:38.621922 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 24 00:39:38.822959 containerd[1570]: time="2026-04-24T00:39:38.773678900Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 24 00:39:38.824675 containerd[1570]: time="2026-04-24T00:39:38.824475509Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 24 00:39:38.847085 containerd[1570]: time="2026-04-24T00:39:38.828916935Z" level=error 
msg="ttrpc: received message on inactive stream" stream=9 Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:08.598 [ERROR][4106] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:09.665 [INFO][4106] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0 calico-kube-controllers-c8bc45cf7- calico-system 2f2e08c2-9fb6-4155-b7e3-6d9c13e99150 1066 0 2026-04-24 00:37:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c8bc45cf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c8bc45cf7-shmzv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali80cb7d1da65 [] [] }} ContainerID="efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" Namespace="calico-system" Pod="calico-kube-controllers-c8bc45cf7-shmzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:09.682 [INFO][4106] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" Namespace="calico-system" Pod="calico-kube-controllers-c8bc45cf7-shmzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:14.868 [INFO][4213] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" 
HandleID="k8s-pod-network.efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" Workload="localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:15.067 [INFO][4213] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" HandleID="k8s-pod-network.efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" Workload="localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00045a560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c8bc45cf7-shmzv", "timestamp":"2026-04-24 00:39:14.868896796 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004d89a0)} Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:15.068 [INFO][4213] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:26.964 [INFO][4213] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:26.964 [INFO][4213] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:27.897 [INFO][4213] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" host="localhost" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:28.463 [INFO][4213] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:29.366 [INFO][4213] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:30.392 [INFO][4213] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:32.662 [INFO][4213] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:32.666 [INFO][4213] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" host="localhost" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:33.009 [INFO][4213] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14 Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:34.047 [INFO][4213] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" host="localhost" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:36.678 [INFO][4213] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" host="localhost" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:36.779 [INFO][4213] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" host="localhost" Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:36.874 [INFO][4213] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:39:39.811225 containerd[1570]: 2026-04-24 00:39:37.015 [INFO][4213] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" HandleID="k8s-pod-network.efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" Workload="localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0" Apr 24 00:39:39.818672 containerd[1570]: 2026-04-24 00:39:37.625 [INFO][4106] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" Namespace="calico-system" Pod="calico-kube-controllers-c8bc45cf7-shmzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0", GenerateName:"calico-kube-controllers-c8bc45cf7-", Namespace:"calico-system", SelfLink:"", UID:"2f2e08c2-9fb6-4155-b7e3-6d9c13e99150", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8bc45cf7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c8bc45cf7-shmzv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80cb7d1da65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:39.818672 containerd[1570]: 2026-04-24 00:39:37.650 [INFO][4106] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" Namespace="calico-system" Pod="calico-kube-controllers-c8bc45cf7-shmzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0" Apr 24 00:39:39.818672 containerd[1570]: 2026-04-24 00:39:37.653 [INFO][4106] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80cb7d1da65 ContainerID="efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" Namespace="calico-system" Pod="calico-kube-controllers-c8bc45cf7-shmzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0" Apr 24 00:39:39.818672 containerd[1570]: 2026-04-24 00:39:38.041 [INFO][4106] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" Namespace="calico-system" Pod="calico-kube-controllers-c8bc45cf7-shmzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0" Apr 24 00:39:39.818672 containerd[1570]: 2026-04-24 
00:39:38.058 [INFO][4106] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" Namespace="calico-system" Pod="calico-kube-controllers-c8bc45cf7-shmzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0", GenerateName:"calico-kube-controllers-c8bc45cf7-", Namespace:"calico-system", SelfLink:"", UID:"2f2e08c2-9fb6-4155-b7e3-6d9c13e99150", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8bc45cf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14", Pod:"calico-kube-controllers-c8bc45cf7-shmzv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80cb7d1da65", MAC:"12:61:5b:1b:fb:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:39.818672 containerd[1570]: 2026-04-24 
00:39:39.614 [INFO][4106] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" Namespace="calico-system" Pod="calico-kube-controllers-c8bc45cf7-shmzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8bc45cf7--shmzv-eth0" Apr 24 00:39:40.006804 systemd-networkd[1483]: cali80cb7d1da65: Gained IPv6LL Apr 24 00:39:40.162334 kubelet[2842]: E0424 00:39:40.157230 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:39:40.507983 containerd[1570]: time="2026-04-24T00:39:40.505467752Z" level=info msg="StartContainer for \"f5b1aaeaf96515053f35ff83f402a18feea7a9d4c5f4765703983b5891fdba61\" returns successfully" Apr 24 00:39:41.128835 containerd[1570]: time="2026-04-24T00:39:41.128659624Z" level=info msg="connecting to shim efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14" address="unix:///run/containerd/s/a9eccec487f611f079856a5d0740f22f5e31817ab26290689d69a24ecf65f7d4" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:39:42.243316 containerd[1570]: time="2026-04-24T00:39:42.243087793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:39:42.269091 containerd[1570]: time="2026-04-24T00:39:42.266405203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.5: active requests=0, bytes read=6050387" Apr 24 00:39:42.378728 containerd[1570]: time="2026-04-24T00:39:42.378591403Z" level=info msg="ImageCreate event name:\"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:39:42.739531 systemd[1]: Started cri-containerd-efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14.scope - libcontainer container 
efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14. Apr 24 00:39:43.045334 containerd[1570]: time="2026-04-24T00:39:43.043545538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:39:43.081924 containerd[1570]: time="2026-04-24T00:39:43.079411480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.5\" with image id \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\", size \"9011804\" in 15.582616352s" Apr 24 00:39:43.081924 containerd[1570]: time="2026-04-24T00:39:43.079822851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\" returns image reference \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\"" Apr 24 00:39:43.168500 containerd[1570]: time="2026-04-24T00:39:43.168294872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 24 00:39:43.210694 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 00:39:43.458166 containerd[1570]: time="2026-04-24T00:39:43.457614074Z" level=info msg="CreateContainer within sandbox \"ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 00:39:43.661751 containerd[1570]: time="2026-04-24T00:39:43.661659223Z" level=info msg="Container 1426e5c6f4a2f9fcd83e51251f8798f6fbf768c172b799a3284c464a06729bec: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:39:43.979008 containerd[1570]: time="2026-04-24T00:39:43.978534698Z" level=info msg="CreateContainer within sandbox 
\"ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1426e5c6f4a2f9fcd83e51251f8798f6fbf768c172b799a3284c464a06729bec\"" Apr 24 00:39:44.096708 containerd[1570]: time="2026-04-24T00:39:44.096592840Z" level=info msg="StartContainer for \"1426e5c6f4a2f9fcd83e51251f8798f6fbf768c172b799a3284c464a06729bec\"" Apr 24 00:39:44.342188 containerd[1570]: time="2026-04-24T00:39:44.341820920Z" level=info msg="connecting to shim 1426e5c6f4a2f9fcd83e51251f8798f6fbf768c172b799a3284c464a06729bec" address="unix:///run/containerd/s/95f41cbc422e0865a84a4f6b033a0be94655562d7e52912d8de954c0a85239bd" protocol=ttrpc version=3 Apr 24 00:39:44.936506 systemd[1]: Started cri-containerd-1426e5c6f4a2f9fcd83e51251f8798f6fbf768c172b799a3284c464a06729bec.scope - libcontainer container 1426e5c6f4a2f9fcd83e51251f8798f6fbf768c172b799a3284c464a06729bec. Apr 24 00:39:45.498015 containerd[1570]: time="2026-04-24T00:39:45.497817651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8bc45cf7-shmzv,Uid:2f2e08c2-9fb6-4155-b7e3-6d9c13e99150,Namespace:calico-system,Attempt:0,} returns sandbox id \"efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14\"" Apr 24 00:39:45.820490 systemd-networkd[1483]: cali40a00ef09cf: Link UP Apr 24 00:39:45.827989 systemd-networkd[1483]: cali40a00ef09cf: Gained carrier Apr 24 00:39:45.882930 containerd[1570]: time="2026-04-24T00:39:45.882118153Z" level=info msg="StartContainer for \"1426e5c6f4a2f9fcd83e51251f8798f6fbf768c172b799a3284c464a06729bec\" returns successfully" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:11.520 [ERROR][4165] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:13.451 [INFO][4165] cni-plugin/plugin.go 342: Calico CNI 
found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0 goldmane-7fb6cdc5d9- calico-system e0e7964a-87e1-45b9-8f9b-26c7b6d887fe 1074 0 2026-04-24 00:37:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7fb6cdc5d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7fb6cdc5d9-bxjp2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali40a00ef09cf [] [] }} ContainerID="8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-bxjp2" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:13.499 [INFO][4165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-bxjp2" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:15.069 [INFO][4256] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" HandleID="k8s-pod-network.8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" Workload="localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:15.250 [INFO][4256] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" HandleID="k8s-pod-network.8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" Workload="localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000481d90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"goldmane-7fb6cdc5d9-bxjp2", "timestamp":"2026-04-24 00:39:15.06906948 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004a74a0)} Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:15.250 [INFO][4256] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:36.802 [INFO][4256] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:37.157 [INFO][4256] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:38.145 [INFO][4256] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" host="localhost" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:40.203 [INFO][4256] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:42.181 [INFO][4256] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:42.826 [INFO][4256] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:43.599 [INFO][4256] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:43.619 [INFO][4256] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" host="localhost" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 
00:39:44.096 [INFO][4256] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2 Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:44.855 [INFO][4256] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" host="localhost" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:45.515 [INFO][4256] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" host="localhost" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:45.520 [INFO][4256] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" host="localhost" Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:45.533 [INFO][4256] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 00:39:46.442938 containerd[1570]: 2026-04-24 00:39:45.543 [INFO][4256] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" HandleID="k8s-pod-network.8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" Workload="localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0" Apr 24 00:39:46.493410 containerd[1570]: 2026-04-24 00:39:45.673 [INFO][4165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-bxjp2" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0", GenerateName:"goldmane-7fb6cdc5d9-", Namespace:"calico-system", SelfLink:"", UID:"e0e7964a-87e1-45b9-8f9b-26c7b6d887fe", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7fb6cdc5d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7fb6cdc5d9-bxjp2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40a00ef09cf", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:46.493410 containerd[1570]: 2026-04-24 00:39:45.674 [INFO][4165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-bxjp2" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0" Apr 24 00:39:46.493410 containerd[1570]: 2026-04-24 00:39:45.674 [INFO][4165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40a00ef09cf ContainerID="8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-bxjp2" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0" Apr 24 00:39:46.493410 containerd[1570]: 2026-04-24 00:39:45.827 [INFO][4165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-bxjp2" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0" Apr 24 00:39:46.493410 containerd[1570]: 2026-04-24 00:39:45.828 [INFO][4165] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-bxjp2" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0", GenerateName:"goldmane-7fb6cdc5d9-", Namespace:"calico-system", SelfLink:"", UID:"e0e7964a-87e1-45b9-8f9b-26c7b6d887fe", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 37, 19, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7fb6cdc5d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2", Pod:"goldmane-7fb6cdc5d9-bxjp2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40a00ef09cf", MAC:"2e:53:66:8a:ec:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:46.493410 containerd[1570]: 2026-04-24 00:39:46.435 [INFO][4165] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-bxjp2" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--bxjp2-eth0" Apr 24 00:39:46.842821 containerd[1570]: time="2026-04-24T00:39:46.842686173Z" level=info msg="connecting to shim 8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2" address="unix:///run/containerd/s/116cc232891da72d0df252d7a65cd2764b2f216cde16229cab029f9f44e2e21e" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:39:47.422798 systemd-networkd[1483]: cali40a00ef09cf: Gained IPv6LL Apr 24 00:39:47.977939 systemd[1]: Started cri-containerd-8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2.scope - libcontainer container 8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2. 
Apr 24 00:39:48.877748 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 00:39:49.300640 containerd[1570]: time="2026-04-24T00:39:49.291968761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-bxjp2,Uid:e0e7964a-87e1-45b9-8f9b-26c7b6d887fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2\"" Apr 24 00:39:49.478348 systemd-networkd[1483]: cali2cfad7b4e3b: Link UP Apr 24 00:39:49.501177 systemd-networkd[1483]: cali2cfad7b4e3b: Gained carrier Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:11.573 [ERROR][4175] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:13.451 [INFO][4175] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--dpk5v-eth0 coredns-7d764666f9- kube-system e790c486-e226-481e-a682-c42b85775d13 1083 0 2026-04-24 00:36:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-dpk5v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2cfad7b4e3b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" Namespace="kube-system" Pod="coredns-7d764666f9-dpk5v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--dpk5v-" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:13.456 [INFO][4175] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" Namespace="kube-system" Pod="coredns-7d764666f9-dpk5v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--dpk5v-eth0" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:15.068 [INFO][4249] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" HandleID="k8s-pod-network.87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" Workload="localhost-k8s-coredns--7d764666f9--dpk5v-eth0" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:15.250 [INFO][4249] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" HandleID="k8s-pod-network.87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" Workload="localhost-k8s-coredns--7d764666f9--dpk5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000474100), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-dpk5v", "timestamp":"2026-04-24 00:39:15.068312219 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000646000)} Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:15.251 [INFO][4249] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:45.538 [INFO][4249] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:45.539 [INFO][4249] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:45.829 [INFO][4249] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" host="localhost" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:46.384 [INFO][4249] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:47.782 [INFO][4249] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:48.078 [INFO][4249] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:48.572 [INFO][4249] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:48.588 [INFO][4249] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" host="localhost" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:48.868 [INFO][4249] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:48.966 [INFO][4249] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" host="localhost" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:49.268 [INFO][4249] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" host="localhost" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:49.322 [INFO][4249] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" host="localhost" Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:49.368 [INFO][4249] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:39:50.250943 containerd[1570]: 2026-04-24 00:39:49.396 [INFO][4249] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" HandleID="k8s-pod-network.87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" Workload="localhost-k8s-coredns--7d764666f9--dpk5v-eth0" Apr 24 00:39:50.296006 containerd[1570]: 2026-04-24 00:39:49.457 [INFO][4175] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" Namespace="kube-system" Pod="coredns-7d764666f9-dpk5v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--dpk5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--dpk5v-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e790c486-e226-481e-a682-c42b85775d13", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 36, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-dpk5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2cfad7b4e3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:50.296006 containerd[1570]: 2026-04-24 00:39:49.471 [INFO][4175] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" Namespace="kube-system" Pod="coredns-7d764666f9-dpk5v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--dpk5v-eth0" Apr 24 00:39:50.296006 containerd[1570]: 2026-04-24 00:39:49.471 [INFO][4175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cfad7b4e3b ContainerID="87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" Namespace="kube-system" Pod="coredns-7d764666f9-dpk5v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--dpk5v-eth0" Apr 24 
00:39:50.296006 containerd[1570]: 2026-04-24 00:39:49.581 [INFO][4175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" Namespace="kube-system" Pod="coredns-7d764666f9-dpk5v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--dpk5v-eth0" Apr 24 00:39:50.296006 containerd[1570]: 2026-04-24 00:39:49.681 [INFO][4175] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" Namespace="kube-system" Pod="coredns-7d764666f9-dpk5v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--dpk5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--dpk5v-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"e790c486-e226-481e-a682-c42b85775d13", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 36, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c", Pod:"coredns-7d764666f9-dpk5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2cfad7b4e3b", 
MAC:"b2:1b:ca:9c:a7:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:39:50.296006 containerd[1570]: 2026-04-24 00:39:50.228 [INFO][4175] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" Namespace="kube-system" Pod="coredns-7d764666f9-dpk5v" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--dpk5v-eth0" Apr 24 00:39:50.967537 containerd[1570]: time="2026-04-24T00:39:50.967040689Z" level=info msg="connecting to shim 87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c" address="unix:///run/containerd/s/ff79db4dbdb7b3227e740cffb724d9dd1e90e9bb644df758fddffb3f3559cf23" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:39:51.502505 systemd[1]: Started cri-containerd-87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c.scope - libcontainer container 87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c. 
Apr 24 00:39:51.518305 systemd-networkd[1483]: cali2cfad7b4e3b: Gained IPv6LL Apr 24 00:39:51.821789 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 00:39:53.974654 containerd[1570]: time="2026-04-24T00:39:53.974480410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-dpk5v,Uid:e790c486-e226-481e-a682-c42b85775d13,Namespace:kube-system,Attempt:0,} returns sandbox id \"87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c\"" Apr 24 00:39:54.076942 kubelet[2842]: E0424 00:39:54.076343 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:39:54.318492 update_engine[1559]: I20260424 00:39:54.314929 1559 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 24 00:39:54.318492 update_engine[1559]: I20260424 00:39:54.315271 1559 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 24 00:39:54.386092 update_engine[1559]: I20260424 00:39:54.384330 1559 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 24 00:39:54.429210 update_engine[1559]: I20260424 00:39:54.427289 1559 omaha_request_params.cc:62] Current group set to stable Apr 24 00:39:54.506183 update_engine[1559]: I20260424 00:39:54.505366 1559 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 24 00:39:54.514446 update_engine[1559]: I20260424 00:39:54.506754 1559 update_attempter.cc:643] Scheduling an action processor start. 
Apr 24 00:39:54.514446 update_engine[1559]: I20260424 00:39:54.507601 1559 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 24 00:39:54.514446 update_engine[1559]: I20260424 00:39:54.512271 1559 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 24 00:39:54.526776 update_engine[1559]: I20260424 00:39:54.514693 1559 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 24 00:39:54.526776 update_engine[1559]: I20260424 00:39:54.516073 1559 omaha_request_action.cc:272] Request: Apr 24 00:39:54.526776 update_engine[1559]: Apr 24 00:39:54.526776 update_engine[1559]: Apr 24 00:39:54.526776 update_engine[1559]: Apr 24 00:39:54.526776 update_engine[1559]: Apr 24 00:39:54.526776 update_engine[1559]: Apr 24 00:39:54.526776 update_engine[1559]: Apr 24 00:39:54.526776 update_engine[1559]: Apr 24 00:39:54.526776 update_engine[1559]: Apr 24 00:39:54.526776 update_engine[1559]: I20260424 00:39:54.516206 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 24 00:39:54.597279 update_engine[1559]: I20260424 00:39:54.594326 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 24 00:39:54.610128 update_engine[1559]: I20260424 00:39:54.595615 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 24 00:39:54.779800 update_engine[1559]: E20260424 00:39:54.776126 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 24 00:39:54.875790 update_engine[1559]: I20260424 00:39:54.828722 1559 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 24 00:39:54.955970 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 24 00:39:55.877481 containerd[1570]: time="2026-04-24T00:39:55.873052653Z" level=info msg="CreateContainer within sandbox \"87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 00:39:56.498314 kubelet[2842]: E0424 00:39:56.488080 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:39:58.369312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount525108905.mount: Deactivated successfully. Apr 24 00:39:59.216164 containerd[1570]: time="2026-04-24T00:39:59.188374528Z" level=info msg="Container 6fb8e2210a6459ba7959800891d5c52cdc8cf7019aeec4daa39e21ca19d25a0a: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:39:59.192584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568855403.mount: Deactivated successfully. 
Apr 24 00:40:00.629546 containerd[1570]: time="2026-04-24T00:40:00.584848791Z" level=info msg="CreateContainer within sandbox \"87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fb8e2210a6459ba7959800891d5c52cdc8cf7019aeec4daa39e21ca19d25a0a\"" Apr 24 00:40:00.924369 containerd[1570]: time="2026-04-24T00:40:00.919648366Z" level=info msg="StartContainer for \"6fb8e2210a6459ba7959800891d5c52cdc8cf7019aeec4daa39e21ca19d25a0a\"" Apr 24 00:40:00.971017 containerd[1570]: time="2026-04-24T00:40:00.968253228Z" level=info msg="connecting to shim 6fb8e2210a6459ba7959800891d5c52cdc8cf7019aeec4daa39e21ca19d25a0a" address="unix:///run/containerd/s/ff79db4dbdb7b3227e740cffb724d9dd1e90e9bb644df758fddffb3f3559cf23" protocol=ttrpc version=3 Apr 24 00:40:02.418778 systemd[1]: Started cri-containerd-6fb8e2210a6459ba7959800891d5c52cdc8cf7019aeec4daa39e21ca19d25a0a.scope - libcontainer container 6fb8e2210a6459ba7959800891d5c52cdc8cf7019aeec4daa39e21ca19d25a0a. 
Apr 24 00:40:02.804613 kubelet[2842]: E0424 00:40:02.600811 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.334s" Apr 24 00:40:04.922495 kubelet[2842]: E0424 00:40:04.869835 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.008s" Apr 24 00:40:05.021362 containerd[1570]: time="2026-04-24T00:40:05.020460296Z" level=info msg="StartContainer for \"6fb8e2210a6459ba7959800891d5c52cdc8cf7019aeec4daa39e21ca19d25a0a\" returns successfully" Apr 24 00:40:05.051353 update_engine[1559]: I20260424 00:40:05.047610 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 24 00:40:05.071661 update_engine[1559]: I20260424 00:40:05.063977 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 24 00:40:05.094131 update_engine[1559]: I20260424 00:40:05.093078 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 24 00:40:05.116003 update_engine[1559]: E20260424 00:40:05.113291 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 24 00:40:05.116003 update_engine[1559]: I20260424 00:40:05.115135 1559 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 24 00:40:06.812233 kubelet[2842]: E0424 00:40:06.812177 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:40:08.006705 kubelet[2842]: I0424 00:40:08.005938 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-dpk5v" podStartSLOduration=195.005759705 podStartE2EDuration="3m15.005759705s" podCreationTimestamp="2026-04-24 00:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:40:07.760832658 +0000 UTC 
m=+199.148303130" watchObservedRunningTime="2026-04-24 00:40:08.005759705 +0000 UTC m=+199.393230161" Apr 24 00:40:11.869185 kubelet[2842]: E0424 00:40:11.864691 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:40:13.075060 kubelet[2842]: E0424 00:40:13.071679 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:40:15.069945 update_engine[1559]: I20260424 00:40:15.061652 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 24 00:40:15.097677 update_engine[1559]: I20260424 00:40:15.086477 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 24 00:40:15.191042 update_engine[1559]: I20260424 00:40:15.189721 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 24 00:40:15.297080 update_engine[1559]: E20260424 00:40:15.287746 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 24 00:40:15.326149 update_engine[1559]: I20260424 00:40:15.316803 1559 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 24 00:40:23.563610 systemd[1]: cri-containerd-14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d.scope: Deactivated successfully. Apr 24 00:40:23.578457 systemd[1]: cri-containerd-14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d.scope: Consumed 50.264s CPU time, 91.1M memory peak, 37M read from disk. 
Apr 24 00:40:24.108114 containerd[1570]: time="2026-04-24T00:40:24.034821212Z" level=info msg="received container exit event container_id:\"14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d\" id:\"14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d\" pid:2695 exit_status:1 exited_at:{seconds:1776991223 nanos:789689398}" Apr 24 00:40:26.104107 update_engine[1559]: I20260424 00:40:26.100004 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 24 00:40:26.118635 update_engine[1559]: I20260424 00:40:26.109752 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 24 00:40:26.174805 update_engine[1559]: I20260424 00:40:26.172401 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 24 00:40:26.208070 update_engine[1559]: E20260424 00:40:26.203668 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 24 00:40:26.300188 update_engine[1559]: I20260424 00:40:26.291831 1559 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 24 00:40:26.321235 update_engine[1559]: I20260424 00:40:26.303583 1559 omaha_request_action.cc:617] Omaha request response: Apr 24 00:40:26.343757 update_engine[1559]: E20260424 00:40:26.324133 1559 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 24 00:40:26.367819 update_engine[1559]: I20260424 00:40:26.337759 1559 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 24 00:40:26.367819 update_engine[1559]: I20260424 00:40:26.353494 1559 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 24 00:40:26.409418 update_engine[1559]: I20260424 00:40:26.363830 1559 update_attempter.cc:306] Processing Done. Apr 24 00:40:26.409418 update_engine[1559]: E20260424 00:40:26.390439 1559 update_attempter.cc:619] Update failed. 
Apr 24 00:40:26.409418 update_engine[1559]: I20260424 00:40:26.397969 1559 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 24 00:40:26.486627 update_engine[1559]: I20260424 00:40:26.405850 1559 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 24 00:40:26.486627 update_engine[1559]: I20260424 00:40:26.413543 1559 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 24 00:40:26.486627 update_engine[1559]: I20260424 00:40:26.455448 1559 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 24 00:40:26.486627 update_engine[1559]: I20260424 00:40:26.458947 1559 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 24 00:40:26.486627 update_engine[1559]: I20260424 00:40:26.459071 1559 omaha_request_action.cc:272] Request: Apr 24 00:40:26.486627 update_engine[1559]: Apr 24 00:40:26.486627 update_engine[1559]: Apr 24 00:40:26.486627 update_engine[1559]: Apr 24 00:40:26.486627 update_engine[1559]: Apr 24 00:40:26.486627 update_engine[1559]: Apr 24 00:40:26.486627 update_engine[1559]: Apr 24 00:40:26.486627 update_engine[1559]: I20260424 00:40:26.459128 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 24 00:40:26.486627 update_engine[1559]: I20260424 00:40:26.479533 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 24 00:40:26.588047 update_engine[1559]: I20260424 00:40:26.524384 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 24 00:40:26.608075 update_engine[1559]: E20260424 00:40:26.604848 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 24 00:40:26.623429 update_engine[1559]: I20260424 00:40:26.619151 1559 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 24 00:40:26.640085 update_engine[1559]: I20260424 00:40:26.635555 1559 omaha_request_action.cc:617] Omaha request response: Apr 24 00:40:26.640085 update_engine[1559]: I20260424 00:40:26.639430 1559 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 24 00:40:26.640085 update_engine[1559]: I20260424 00:40:26.639770 1559 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 24 00:40:26.640085 update_engine[1559]: I20260424 00:40:26.639839 1559 update_attempter.cc:306] Processing Done. Apr 24 00:40:26.659784 update_engine[1559]: I20260424 00:40:26.643481 1559 update_attempter.cc:310] Error event sent. Apr 24 00:40:26.659784 update_engine[1559]: I20260424 00:40:26.643800 1559 update_check_scheduler.cc:74] Next update check in 40m28s Apr 24 00:40:26.889460 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 24 00:40:28.121970 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 24 00:40:29.679074 containerd[1570]: time="2026-04-24T00:40:29.678208084Z" level=error msg="ExecSync for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 24 00:40:30.562577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d-rootfs.mount: Deactivated successfully. 
Apr 24 00:40:34.857072 kubelet[2842]: E0424 00:40:33.353774 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 24 00:40:42.761099 kubelet[2842]: E0424 00:40:42.759372 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="33.454s" Apr 24 00:40:44.891890 kubelet[2842]: E0424 00:40:44.891547 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:40:48.774965 containerd[1570]: time="2026-04-24T00:40:48.774283412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:40:49.063934 containerd[1570]: time="2026-04-24T00:40:49.050799277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=46175896" Apr 24 00:40:49.388131 containerd[1570]: time="2026-04-24T00:40:49.311778197Z" level=info msg="ImageCreate event name:\"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:40:49.698535 containerd[1570]: time="2026-04-24T00:40:49.696182410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:40:49.792900 containerd[1570]: time="2026-04-24T00:40:49.792007207Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id 
\"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 1m6.614669475s" Apr 24 00:40:49.800229 containerd[1570]: time="2026-04-24T00:40:49.799065027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 24 00:40:49.979457 kubelet[2842]: E0424 00:40:49.977216 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.799s" Apr 24 00:40:50.023060 systemd[1]: cri-containerd-afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd.scope: Deactivated successfully. Apr 24 00:40:50.028588 systemd[1]: cri-containerd-afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd.scope: Consumed 32.634s CPU time, 99.8M memory peak, 8.7M read from disk. Apr 24 00:40:50.633179 containerd[1570]: time="2026-04-24T00:40:50.632980581Z" level=info msg="received container exit event container_id:\"afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd\" id:\"afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd\" pid:3183 exit_status:1 exited_at:{seconds:1776991250 nanos:510180768}" Apr 24 00:40:51.154777 containerd[1570]: time="2026-04-24T00:40:51.154553362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 24 00:40:51.695351 kubelet[2842]: E0424 00:40:51.607568 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.63s" Apr 24 00:40:52.519897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd-rootfs.mount: Deactivated successfully. 
Apr 24 00:40:52.619842 containerd[1570]: time="2026-04-24T00:40:52.619750313Z" level=info msg="CreateContainer within sandbox \"03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 00:40:53.073359 containerd[1570]: time="2026-04-24T00:40:53.071431075Z" level=info msg="Container 8ab5c2c21bd65b272afb5021b290cfae1d49d9412aad943194b5b0ef44cdfc17: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:40:53.084476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696227395.mount: Deactivated successfully. Apr 24 00:40:53.452252 kubelet[2842]: E0424 00:40:53.449284 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.755s" Apr 24 00:40:53.544989 containerd[1570]: time="2026-04-24T00:40:53.522414012Z" level=info msg="CreateContainer within sandbox \"03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8ab5c2c21bd65b272afb5021b290cfae1d49d9412aad943194b5b0ef44cdfc17\"" Apr 24 00:40:53.598204 kubelet[2842]: I0424 00:40:53.598153 2842 scope.go:122] "RemoveContainer" containerID="14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d" Apr 24 00:40:53.598647 kubelet[2842]: E0424 00:40:53.598632 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:40:53.695170 containerd[1570]: time="2026-04-24T00:40:53.692743981Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:40:53.706319 containerd[1570]: time="2026-04-24T00:40:53.703009684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=77" Apr 24 00:40:53.894464 containerd[1570]: 
time="2026-04-24T00:40:53.894295871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 2.73961717s" Apr 24 00:40:53.894464 containerd[1570]: time="2026-04-24T00:40:53.894375952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 24 00:40:53.898885 kubelet[2842]: E0424 00:40:53.897391 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:40:53.898984 containerd[1570]: time="2026-04-24T00:40:53.898395401Z" level=info msg="StartContainer for \"8ab5c2c21bd65b272afb5021b290cfae1d49d9412aad943194b5b0ef44cdfc17\"" Apr 24 00:40:53.955418 containerd[1570]: time="2026-04-24T00:40:53.955242069Z" level=info msg="connecting to shim 8ab5c2c21bd65b272afb5021b290cfae1d49d9412aad943194b5b0ef44cdfc17" address="unix:///run/containerd/s/f2dd0e7d091a6a06381be00e2c3171a755123fca63a5d7ebc83068e3db98af21" protocol=ttrpc version=3 Apr 24 00:40:54.419355 kubelet[2842]: E0424 00:40:54.419135 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:40:54.959269 kubelet[2842]: E0424 00:40:54.787761 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.331s" Apr 24 00:40:55.117291 kubelet[2842]: E0424 00:40:55.116489 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:40:55.307045 containerd[1570]: time="2026-04-24T00:40:55.306356113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\"" Apr 24 00:40:56.309706 systemd[1]: Started cri-containerd-8ab5c2c21bd65b272afb5021b290cfae1d49d9412aad943194b5b0ef44cdfc17.scope - libcontainer container 8ab5c2c21bd65b272afb5021b290cfae1d49d9412aad943194b5b0ef44cdfc17. Apr 24 00:40:56.858972 containerd[1570]: time="2026-04-24T00:40:56.856447684Z" level=info msg="CreateContainer within sandbox \"c57ceee4282bd3e942797bae4868969cef2d4e88770c579d035321d2d7089226\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 24 00:40:57.871001 containerd[1570]: time="2026-04-24T00:40:57.870918802Z" level=info msg="Container 898b2fc2ab71b4e44911424d8679e70343cbd53bdee14866f5810ff9b9011671: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:40:58.460140 containerd[1570]: time="2026-04-24T00:40:58.450837487Z" level=info msg="CreateContainer within sandbox \"29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 00:41:00.195029 containerd[1570]: time="2026-04-24T00:41:00.190597571Z" level=info msg="CreateContainer within sandbox \"c57ceee4282bd3e942797bae4868969cef2d4e88770c579d035321d2d7089226\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"898b2fc2ab71b4e44911424d8679e70343cbd53bdee14866f5810ff9b9011671\"" Apr 24 00:41:00.692766 containerd[1570]: time="2026-04-24T00:41:00.658431315Z" level=error msg="get state for 8ab5c2c21bd65b272afb5021b290cfae1d49d9412aad943194b5b0ef44cdfc17" error="context deadline exceeded" Apr 24 00:41:00.704722 containerd[1570]: time="2026-04-24T00:41:00.696242103Z" level=warning msg="unknown status" status=0 Apr 24 00:41:01.661515 containerd[1570]: time="2026-04-24T00:41:01.661216153Z" level=info msg="StartContainer for 
\"898b2fc2ab71b4e44911424d8679e70343cbd53bdee14866f5810ff9b9011671\"" Apr 24 00:41:01.914596 containerd[1570]: time="2026-04-24T00:41:01.913283767Z" level=info msg="Container 1e1ef1d7c1adeaa4c1f4e99b102ff658be05f1b0b5e24e046dfec134d51a305b: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:41:02.005354 containerd[1570]: time="2026-04-24T00:41:01.925482204Z" level=info msg="connecting to shim 898b2fc2ab71b4e44911424d8679e70343cbd53bdee14866f5810ff9b9011671" address="unix:///run/containerd/s/97bc58408e81eadf2df47d3b43452fc839e7c18c7a9fd77c41fbc64fb6f7a86e" protocol=ttrpc version=3 Apr 24 00:41:02.041089 containerd[1570]: time="2026-04-24T00:41:02.040135973Z" level=error msg="ExecSync for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 24 00:41:02.057358 kubelet[2842]: E0424 00:41:02.056847 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 24 00:41:03.567155 containerd[1570]: time="2026-04-24T00:41:03.565025841Z" level=error msg="get state for 8ab5c2c21bd65b272afb5021b290cfae1d49d9412aad943194b5b0ef44cdfc17" error="context deadline exceeded" Apr 24 00:41:03.596087 containerd[1570]: time="2026-04-24T00:41:03.592767061Z" level=warning msg="unknown status" status=0 Apr 24 00:41:03.713992 containerd[1570]: time="2026-04-24T00:41:03.709465541Z" level=info msg="CreateContainer within sandbox \"29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e1ef1d7c1adeaa4c1f4e99b102ff658be05f1b0b5e24e046dfec134d51a305b\"" Apr 24 00:41:05.407291 containerd[1570]: 
time="2026-04-24T00:41:05.406163984Z" level=info msg="StartContainer for \"1e1ef1d7c1adeaa4c1f4e99b102ff658be05f1b0b5e24e046dfec134d51a305b\"" Apr 24 00:41:06.382564 systemd[1]: Started cri-containerd-898b2fc2ab71b4e44911424d8679e70343cbd53bdee14866f5810ff9b9011671.scope - libcontainer container 898b2fc2ab71b4e44911424d8679e70343cbd53bdee14866f5810ff9b9011671. Apr 24 00:41:06.984962 kubelet[2842]: E0424 00:41:06.984722 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.588s" Apr 24 00:41:07.206274 containerd[1570]: time="2026-04-24T00:41:07.205985468Z" level=info msg="connecting to shim 1e1ef1d7c1adeaa4c1f4e99b102ff658be05f1b0b5e24e046dfec134d51a305b" address="unix:///run/containerd/s/8ee769a26b8763b67497a10690c10515817ed4fe9f1b7de4fab45f677a738ce9" protocol=ttrpc version=3 Apr 24 00:41:07.726328 containerd[1570]: time="2026-04-24T00:41:07.420392462Z" level=error msg="get state for 8ab5c2c21bd65b272afb5021b290cfae1d49d9412aad943194b5b0ef44cdfc17" error="context deadline exceeded" Apr 24 00:41:07.851445 containerd[1570]: time="2026-04-24T00:41:07.733638935Z" level=warning msg="unknown status" status=0 Apr 24 00:41:12.450146 kubelet[2842]: I0424 00:41:12.449724 2842 scope.go:122] "RemoveContainer" containerID="afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd" Apr 24 00:41:13.242460 systemd[1]: Started cri-containerd-1e1ef1d7c1adeaa4c1f4e99b102ff658be05f1b0b5e24e046dfec134d51a305b.scope - libcontainer container 1e1ef1d7c1adeaa4c1f4e99b102ff658be05f1b0b5e24e046dfec134d51a305b. 
Apr 24 00:41:14.375439 containerd[1570]: time="2026-04-24T00:41:14.374201710Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 24 00:41:14.572816 containerd[1570]: time="2026-04-24T00:41:14.572350756Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 24 00:41:14.578838 containerd[1570]: time="2026-04-24T00:41:14.578614669Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 24 00:41:15.084058 kubelet[2842]: E0424 00:41:15.082380 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:41:17.095128 containerd[1570]: time="2026-04-24T00:41:17.087245074Z" level=info msg="StartContainer for \"898b2fc2ab71b4e44911424d8679e70343cbd53bdee14866f5810ff9b9011671\" returns successfully" Apr 24 00:41:18.356884 containerd[1570]: time="2026-04-24T00:41:18.356803665Z" level=info msg="StartContainer for \"8ab5c2c21bd65b272afb5021b290cfae1d49d9412aad943194b5b0ef44cdfc17\" returns successfully" Apr 24 00:41:18.384102 containerd[1570]: time="2026-04-24T00:41:18.381721005Z" level=info msg="CreateContainer within sandbox \"29c9b8e1e5ae9689ab8fdb416a7a8bf111eb06fcf801e46ef65e57a22323c201\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 24 00:41:18.525984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3654681361.mount: Deactivated successfully. 
Apr 24 00:41:18.551787 containerd[1570]: time="2026-04-24T00:41:18.551562202Z" level=info msg="Container db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:41:18.615271 containerd[1570]: time="2026-04-24T00:41:18.610708689Z" level=info msg="CreateContainer within sandbox \"29c9b8e1e5ae9689ab8fdb416a7a8bf111eb06fcf801e46ef65e57a22323c201\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f\"" Apr 24 00:41:18.615271 containerd[1570]: time="2026-04-24T00:41:18.612597154Z" level=info msg="StartContainer for \"db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f\"" Apr 24 00:41:18.621831 containerd[1570]: time="2026-04-24T00:41:18.621755861Z" level=info msg="connecting to shim db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f" address="unix:///run/containerd/s/95e659de8a4b4f412a261fa2fba5d6d18d60080905262246c92bd6465c1c712d" protocol=ttrpc version=3 Apr 24 00:41:18.621998 kubelet[2842]: E0424 00:41:18.618111 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.564s" Apr 24 00:41:18.993365 systemd[1]: Started cri-containerd-db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f.scope - libcontainer container db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f. 
Apr 24 00:41:19.191239 containerd[1570]: time="2026-04-24T00:41:19.190788211Z" level=info msg="StartContainer for \"1e1ef1d7c1adeaa4c1f4e99b102ff658be05f1b0b5e24e046dfec134d51a305b\" returns successfully" Apr 24 00:41:19.586716 containerd[1570]: time="2026-04-24T00:41:19.585330495Z" level=info msg="StartContainer for \"db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f\" returns successfully" Apr 24 00:41:19.984170 kubelet[2842]: I0424 00:41:19.979333 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-66975fdd9d-7fq42" podStartSLOduration=162.875617069 podStartE2EDuration="4m0.979208282s" podCreationTimestamp="2026-04-24 00:37:19 +0000 UTC" firstStartedPulling="2026-04-24 00:39:36.875230887 +0000 UTC m=+168.262701352" lastFinishedPulling="2026-04-24 00:40:54.978822109 +0000 UTC m=+246.366292565" observedRunningTime="2026-04-24 00:41:19.973564117 +0000 UTC m=+271.361034598" watchObservedRunningTime="2026-04-24 00:41:19.979208282 +0000 UTC m=+271.366678751" Apr 24 00:41:19.987194 kubelet[2842]: E0424 00:41:19.986624 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:41:19.988529 kubelet[2842]: E0424 00:41:19.988275 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:41:20.289234 kubelet[2842]: I0424 00:41:20.288451 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-66975fdd9d-5jt89" podStartSLOduration=154.454061787 podStartE2EDuration="4m1.288382985s" podCreationTimestamp="2026-04-24 00:37:19 +0000 UTC" firstStartedPulling="2026-04-24 00:39:23.42941375 +0000 UTC m=+154.816884222" lastFinishedPulling="2026-04-24 00:40:50.263734961 +0000 UTC m=+241.651205420" 
observedRunningTime="2026-04-24 00:41:20.139749037 +0000 UTC m=+271.527219497" watchObservedRunningTime="2026-04-24 00:41:20.288382985 +0000 UTC m=+271.675853462" Apr 24 00:41:20.808967 containerd[1570]: time="2026-04-24T00:41:20.807767853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:41:20.814805 containerd[1570]: time="2026-04-24T00:41:20.812927237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5: active requests=0, bytes read=13498053" Apr 24 00:41:20.817078 containerd[1570]: time="2026-04-24T00:41:20.817036663Z" level=info msg="ImageCreate event name:\"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:41:20.826073 containerd[1570]: time="2026-04-24T00:41:20.825691509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:41:20.833286 containerd[1570]: time="2026-04-24T00:41:20.833086722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" with image id \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\", size \"16459430\" in 25.525685083s" Apr 24 00:41:20.833286 containerd[1570]: time="2026-04-24T00:41:20.833203026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" returns image reference \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\"" Apr 24 00:41:20.899614 containerd[1570]: time="2026-04-24T00:41:20.899568112Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\"" Apr 24 00:41:20.929829 containerd[1570]: time="2026-04-24T00:41:20.929732155Z" level=info msg="CreateContainer within sandbox \"0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 00:41:20.996385 containerd[1570]: time="2026-04-24T00:41:20.996202405Z" level=info msg="Container 330365919c62e511110acaa5315c185eee5386a8217b5965f0b328a3abdbdcc6: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:41:20.999740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2459723920.mount: Deactivated successfully. Apr 24 00:41:21.000887 kubelet[2842]: E0424 00:41:21.000587 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:41:21.040717 containerd[1570]: time="2026-04-24T00:41:21.040661002Z" level=info msg="CreateContainer within sandbox \"0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"330365919c62e511110acaa5315c185eee5386a8217b5965f0b328a3abdbdcc6\"" Apr 24 00:41:21.060968 containerd[1570]: time="2026-04-24T00:41:21.058901540Z" level=info msg="StartContainer for \"330365919c62e511110acaa5315c185eee5386a8217b5965f0b328a3abdbdcc6\"" Apr 24 00:41:21.069639 containerd[1570]: time="2026-04-24T00:41:21.068998249Z" level=info msg="connecting to shim 330365919c62e511110acaa5315c185eee5386a8217b5965f0b328a3abdbdcc6" address="unix:///run/containerd/s/678d6184270d1e8aab3dab3be45df04d7986d06899242228fb46f2039e9dbdc8" protocol=ttrpc version=3 Apr 24 00:41:21.202124 systemd-networkd[1483]: vxlan.calico: Link UP Apr 24 00:41:21.202177 systemd-networkd[1483]: vxlan.calico: Gained carrier Apr 24 00:41:21.238467 systemd[1]: Started 
cri-containerd-330365919c62e511110acaa5315c185eee5386a8217b5965f0b328a3abdbdcc6.scope - libcontainer container 330365919c62e511110acaa5315c185eee5386a8217b5965f0b328a3abdbdcc6. Apr 24 00:41:21.727564 containerd[1570]: time="2026-04-24T00:41:21.719239519Z" level=info msg="StartContainer for \"330365919c62e511110acaa5315c185eee5386a8217b5965f0b328a3abdbdcc6\" returns successfully" Apr 24 00:41:22.290896 kubelet[2842]: I0424 00:41:22.290661 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-p4jvm" podStartSLOduration=117.982248295 podStartE2EDuration="3m59.290602145s" podCreationTimestamp="2026-04-24 00:37:23 +0000 UTC" firstStartedPulling="2026-04-24 00:39:19.557675463 +0000 UTC m=+150.945145919" lastFinishedPulling="2026-04-24 00:41:20.866029314 +0000 UTC m=+272.253499769" observedRunningTime="2026-04-24 00:41:22.289902128 +0000 UTC m=+273.677372593" watchObservedRunningTime="2026-04-24 00:41:22.290602145 +0000 UTC m=+273.678072615" Apr 24 00:41:22.549352 kubelet[2842]: I0424 00:41:22.545766 2842 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 24 00:41:22.555355 kubelet[2842]: I0424 00:41:22.555283 2842 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 24 00:41:23.039656 systemd-networkd[1483]: vxlan.calico: Gained IPv6LL Apr 24 00:41:26.179439 kubelet[2842]: E0424 00:41:26.179202 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:41:26.185311 containerd[1570]: time="2026-04-24T00:41:26.185192653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 
00:41:26.186842 containerd[1570]: time="2026-04-24T00:41:26.186443713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.5: active requests=0, bytes read=50078175" Apr 24 00:41:26.190851 containerd[1570]: time="2026-04-24T00:41:26.190635915Z" level=info msg="ImageCreate event name:\"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:41:26.197560 containerd[1570]: time="2026-04-24T00:41:26.197433475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:41:26.201795 containerd[1570]: time="2026-04-24T00:41:26.201674266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" with image id \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\", size \"53039568\" in 5.302005604s" Apr 24 00:41:26.201795 containerd[1570]: time="2026-04-24T00:41:26.201738058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" returns image reference \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\"" Apr 24 00:41:26.205383 containerd[1570]: time="2026-04-24T00:41:26.205365417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\"" Apr 24 00:41:26.292995 containerd[1570]: time="2026-04-24T00:41:26.291979299Z" level=info msg="CreateContainer within sandbox \"efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 00:41:26.335288 containerd[1570]: time="2026-04-24T00:41:26.335222964Z" level=info 
msg="Container 2f2c86439c798d2e231edd4758681a02d72424710b4e10e16d179a362add54cd: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:41:26.364007 containerd[1570]: time="2026-04-24T00:41:26.363593663Z" level=info msg="CreateContainer within sandbox \"efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2f2c86439c798d2e231edd4758681a02d72424710b4e10e16d179a362add54cd\"" Apr 24 00:41:26.377827 containerd[1570]: time="2026-04-24T00:41:26.377781527Z" level=info msg="StartContainer for \"2f2c86439c798d2e231edd4758681a02d72424710b4e10e16d179a362add54cd\"" Apr 24 00:41:26.385530 containerd[1570]: time="2026-04-24T00:41:26.385377980Z" level=info msg="connecting to shim 2f2c86439c798d2e231edd4758681a02d72424710b4e10e16d179a362add54cd" address="unix:///run/containerd/s/a9eccec487f611f079856a5d0740f22f5e31817ab26290689d69a24ecf65f7d4" protocol=ttrpc version=3 Apr 24 00:41:26.511117 systemd[1]: Started cri-containerd-2f2c86439c798d2e231edd4758681a02d72424710b4e10e16d179a362add54cd.scope - libcontainer container 2f2c86439c798d2e231edd4758681a02d72424710b4e10e16d179a362add54cd. 
Apr 24 00:41:26.636882 containerd[1570]: time="2026-04-24T00:41:26.636678953Z" level=info msg="StartContainer for \"2f2c86439c798d2e231edd4758681a02d72424710b4e10e16d179a362add54cd\" returns successfully" Apr 24 00:41:28.361728 kubelet[2842]: I0424 00:41:28.356963 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c8bc45cf7-shmzv" podStartSLOduration=143.815802387 podStartE2EDuration="4m4.356944706s" podCreationTimestamp="2026-04-24 00:37:24 +0000 UTC" firstStartedPulling="2026-04-24 00:39:45.663901815 +0000 UTC m=+177.051372274" lastFinishedPulling="2026-04-24 00:41:26.205044131 +0000 UTC m=+277.592514593" observedRunningTime="2026-04-24 00:41:27.873532461 +0000 UTC m=+279.261002933" watchObservedRunningTime="2026-04-24 00:41:28.356944706 +0000 UTC m=+279.744415173" Apr 24 00:41:28.578972 kubelet[2842]: E0424 00:41:28.578918 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:41:28.707310 kubelet[2842]: E0424 00:41:28.706054 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:41:29.464172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898821844.mount: Deactivated successfully. 
Apr 24 00:41:29.509554 containerd[1570]: time="2026-04-24T00:41:29.508708220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:41:29.512955 containerd[1570]: time="2026-04-24T00:41:29.510951894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.5: active requests=0, bytes read=17000660" Apr 24 00:41:29.516920 containerd[1570]: time="2026-04-24T00:41:29.516321680Z" level=info msg="ImageCreate event name:\"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:41:29.534691 containerd[1570]: time="2026-04-24T00:41:29.534607674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:41:29.535652 containerd[1570]: time="2026-04-24T00:41:29.535501116Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" with image id \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\", size \"17000490\" in 3.330006502s" Apr 24 00:41:29.535652 containerd[1570]: time="2026-04-24T00:41:29.535529439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" returns image reference \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\"" Apr 24 00:41:29.538163 containerd[1570]: time="2026-04-24T00:41:29.538120480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\"" Apr 24 00:41:29.551834 containerd[1570]: time="2026-04-24T00:41:29.551781140Z" level=info msg="CreateContainer within sandbox 
\"ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 00:41:29.574443 containerd[1570]: time="2026-04-24T00:41:29.574280228Z" level=info msg="Container 9963862dfa28a126bbcd6a0471690adedad45356154537ce05ad8e55089bc26f: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:41:29.611354 containerd[1570]: time="2026-04-24T00:41:29.611304212Z" level=info msg="CreateContainer within sandbox \"ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9963862dfa28a126bbcd6a0471690adedad45356154537ce05ad8e55089bc26f\"" Apr 24 00:41:29.622369 containerd[1570]: time="2026-04-24T00:41:29.622227239Z" level=info msg="StartContainer for \"9963862dfa28a126bbcd6a0471690adedad45356154537ce05ad8e55089bc26f\"" Apr 24 00:41:29.696139 containerd[1570]: time="2026-04-24T00:41:29.695916012Z" level=info msg="connecting to shim 9963862dfa28a126bbcd6a0471690adedad45356154537ce05ad8e55089bc26f" address="unix:///run/containerd/s/95f41cbc422e0865a84a4f6b033a0be94655562d7e52912d8de954c0a85239bd" protocol=ttrpc version=3 Apr 24 00:41:29.812102 systemd[1]: Started cri-containerd-9963862dfa28a126bbcd6a0471690adedad45356154537ce05ad8e55089bc26f.scope - libcontainer container 9963862dfa28a126bbcd6a0471690adedad45356154537ce05ad8e55089bc26f. 
Apr 24 00:41:30.013109 containerd[1570]: time="2026-04-24T00:41:30.013059571Z" level=info msg="StartContainer for \"9963862dfa28a126bbcd6a0471690adedad45356154537ce05ad8e55089bc26f\" returns successfully" Apr 24 00:41:30.886986 kubelet[2842]: I0424 00:41:30.886782 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-596bbfbf67-qp7wt" podStartSLOduration=19.419615446999998 podStartE2EDuration="2m27.886728347s" podCreationTimestamp="2026-04-24 00:39:03 +0000 UTC" firstStartedPulling="2026-04-24 00:39:21.070221008 +0000 UTC m=+152.457691473" lastFinishedPulling="2026-04-24 00:41:29.537333917 +0000 UTC m=+280.924804373" observedRunningTime="2026-04-24 00:41:30.875627796 +0000 UTC m=+282.263098255" watchObservedRunningTime="2026-04-24 00:41:30.886728347 +0000 UTC m=+282.274198814" Apr 24 00:41:33.555446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219951953.mount: Deactivated successfully. Apr 24 00:41:34.173803 containerd[1570]: time="2026-04-24T00:41:34.173518631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:41:34.175467 containerd[1570]: time="2026-04-24T00:41:34.175201673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.5: active requests=0, bytes read=53086083" Apr 24 00:41:34.176322 containerd[1570]: time="2026-04-24T00:41:34.176263521Z" level=info msg="ImageCreate event name:\"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:41:34.178462 containerd[1570]: time="2026-04-24T00:41:34.178367631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:41:34.179667 containerd[1570]: time="2026-04-24T00:41:34.179635508Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" with image id \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\", size \"53085929\" in 4.641470906s" Apr 24 00:41:34.179714 containerd[1570]: time="2026-04-24T00:41:34.179673077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" returns image reference \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\"" Apr 24 00:41:34.203636 containerd[1570]: time="2026-04-24T00:41:34.203519697Z" level=info msg="CreateContainer within sandbox \"8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 00:41:34.248832 containerd[1570]: time="2026-04-24T00:41:34.248238796Z" level=info msg="Container 9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:41:34.288249 containerd[1570]: time="2026-04-24T00:41:34.288086023Z" level=info msg="CreateContainer within sandbox \"8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\"" Apr 24 00:41:34.289209 containerd[1570]: time="2026-04-24T00:41:34.288960270Z" level=info msg="StartContainer for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\"" Apr 24 00:41:34.290541 containerd[1570]: time="2026-04-24T00:41:34.290484791Z" level=info msg="connecting to shim 9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" address="unix:///run/containerd/s/116cc232891da72d0df252d7a65cd2764b2f216cde16229cab029f9f44e2e21e" protocol=ttrpc version=3 Apr 24 00:41:34.388148 systemd[1]: Started 
cri-containerd-9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993.scope - libcontainer container 9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993. Apr 24 00:41:34.489123 containerd[1570]: time="2026-04-24T00:41:34.488806738Z" level=info msg="StartContainer for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" returns successfully" Apr 24 00:41:35.519458 containerd[1570]: time="2026-04-24T00:41:35.517320718Z" level=warning msg="container event discarded" container=335a8f8215bb6ad8724ec2a6b57399a66de39b9cbbe35a30cae686dc7b0fd3af type=CONTAINER_CREATED_EVENT Apr 24 00:41:35.519458 containerd[1570]: time="2026-04-24T00:41:35.519423327Z" level=warning msg="container event discarded" container=335a8f8215bb6ad8724ec2a6b57399a66de39b9cbbe35a30cae686dc7b0fd3af type=CONTAINER_STARTED_EVENT Apr 24 00:41:35.809252 containerd[1570]: time="2026-04-24T00:41:35.808969726Z" level=warning msg="container event discarded" container=e71f5b7eff86de8ad8b88e29f4efc3c66b9214606076b1dd141dda1605a1531c type=CONTAINER_CREATED_EVENT Apr 24 00:41:35.809252 containerd[1570]: time="2026-04-24T00:41:35.809188721Z" level=warning msg="container event discarded" container=e71f5b7eff86de8ad8b88e29f4efc3c66b9214606076b1dd141dda1605a1531c type=CONTAINER_STARTED_EVENT Apr 24 00:41:35.834095 containerd[1570]: time="2026-04-24T00:41:35.830247683Z" level=warning msg="container event discarded" container=c57ceee4282bd3e942797bae4868969cef2d4e88770c579d035321d2d7089226 type=CONTAINER_CREATED_EVENT Apr 24 00:41:35.838759 containerd[1570]: time="2026-04-24T00:41:35.835035906Z" level=warning msg="container event discarded" container=c57ceee4282bd3e942797bae4868969cef2d4e88770c579d035321d2d7089226 type=CONTAINER_STARTED_EVENT Apr 24 00:41:35.838759 containerd[1570]: time="2026-04-24T00:41:35.835342807Z" level=warning msg="container event discarded" container=1a280f85f11c0e3cdcda41d767bc3d927bad254155329620bc33a2a0a495cced type=CONTAINER_CREATED_EVENT Apr 24 
00:41:35.890030 containerd[1570]: time="2026-04-24T00:41:35.889531902Z" level=warning msg="container event discarded" container=1bf5dadc87935e4f2907574421ef14bef7bd9f397729877b3d9018f06216ba20 type=CONTAINER_CREATED_EVENT Apr 24 00:41:35.890030 containerd[1570]: time="2026-04-24T00:41:35.890029482Z" level=warning msg="container event discarded" container=14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d type=CONTAINER_CREATED_EVENT Apr 24 00:41:36.137985 containerd[1570]: time="2026-04-24T00:41:36.131699536Z" level=warning msg="container event discarded" container=1a280f85f11c0e3cdcda41d767bc3d927bad254155329620bc33a2a0a495cced type=CONTAINER_STARTED_EVENT Apr 24 00:41:36.193530 containerd[1570]: time="2026-04-24T00:41:36.192604902Z" level=warning msg="container event discarded" container=14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d type=CONTAINER_STARTED_EVENT Apr 24 00:41:36.193530 containerd[1570]: time="2026-04-24T00:41:36.193034644Z" level=warning msg="container event discarded" container=1bf5dadc87935e4f2907574421ef14bef7bd9f397729877b3d9018f06216ba20 type=CONTAINER_STARTED_EVENT Apr 24 00:41:39.488328 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:58104.service - OpenSSH per-connection server daemon (10.0.0.1:58104). Apr 24 00:41:39.960134 sshd[5749]: Accepted publickey for core from 10.0.0.1 port 58104 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo Apr 24 00:41:39.972384 sshd-session[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:41:40.010052 systemd-logind[1558]: New session 10 of user core. Apr 24 00:41:40.026830 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 00:41:41.935108 sshd[5754]: Connection closed by 10.0.0.1 port 58104 Apr 24 00:41:41.943673 sshd-session[5749]: pam_unix(sshd:session): session closed for user core Apr 24 00:41:42.129233 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:58104.service: Deactivated successfully. 
Apr 24 00:41:42.135246 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 00:41:42.136688 systemd[1]: session-10.scope: Consumed 1.111s CPU time, 59.9M memory peak. Apr 24 00:41:42.137745 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit. Apr 24 00:41:42.139369 systemd-logind[1558]: Removed session 10. Apr 24 00:41:47.190938 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:49034.service - OpenSSH per-connection server daemon (10.0.0.1:49034). Apr 24 00:41:47.361740 sshd[5786]: Accepted publickey for core from 10.0.0.1 port 49034 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo Apr 24 00:41:47.365728 sshd-session[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:41:47.477385 systemd-logind[1558]: New session 11 of user core. Apr 24 00:41:47.504925 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 00:41:48.971282 sshd[5789]: Connection closed by 10.0.0.1 port 49034 Apr 24 00:41:48.972023 sshd-session[5786]: pam_unix(sshd:session): session closed for user core Apr 24 00:41:49.153640 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:49034.service: Deactivated successfully. Apr 24 00:41:49.184624 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 00:41:49.208959 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit. Apr 24 00:41:49.218366 systemd-logind[1558]: Removed session 11. Apr 24 00:41:54.117687 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:49038.service - OpenSSH per-connection server daemon (10.0.0.1:49038). 
Apr 24 00:41:54.685036 containerd[1570]: time="2026-04-24T00:41:54.663991547Z" level=warning msg="container event discarded" container=257a5b7d6ca0d79e851cb767825a5cc2f6ebc87187a88c271b8129d966d28da4 type=CONTAINER_CREATED_EVENT Apr 24 00:41:54.685036 containerd[1570]: time="2026-04-24T00:41:54.664512984Z" level=warning msg="container event discarded" container=257a5b7d6ca0d79e851cb767825a5cc2f6ebc87187a88c271b8129d966d28da4 type=CONTAINER_STARTED_EVENT Apr 24 00:41:54.797922 containerd[1570]: time="2026-04-24T00:41:54.797278059Z" level=warning msg="container event discarded" container=d2cf51532e15854fdc427d18a439a3f6fcb16e8a542a149987bf7c8b346ca7da type=CONTAINER_CREATED_EVENT Apr 24 00:41:55.282779 containerd[1570]: time="2026-04-24T00:41:55.282298684Z" level=warning msg="container event discarded" container=d2cf51532e15854fdc427d18a439a3f6fcb16e8a542a149987bf7c8b346ca7da type=CONTAINER_STARTED_EVENT Apr 24 00:41:55.356410 sshd[5852]: Accepted publickey for core from 10.0.0.1 port 49038 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo Apr 24 00:41:55.372772 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:41:55.575702 systemd-logind[1558]: New session 12 of user core. Apr 24 00:41:55.603594 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 24 00:41:56.482164 containerd[1570]: time="2026-04-24T00:41:56.426596084Z" level=warning msg="container event discarded" container=29c9b8e1e5ae9689ab8fdb416a7a8bf111eb06fcf801e46ef65e57a22323c201 type=CONTAINER_CREATED_EVENT Apr 24 00:41:56.482164 containerd[1570]: time="2026-04-24T00:41:56.480844791Z" level=warning msg="container event discarded" container=29c9b8e1e5ae9689ab8fdb416a7a8bf111eb06fcf801e46ef65e57a22323c201 type=CONTAINER_STARTED_EVENT Apr 24 00:41:59.164975 sshd[5856]: Connection closed by 10.0.0.1 port 49038 Apr 24 00:41:59.177675 sshd-session[5852]: pam_unix(sshd:session): session closed for user core Apr 24 00:41:59.324489 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:49038.service: Deactivated successfully. Apr 24 00:41:59.382598 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 00:41:59.394338 systemd[1]: session-12.scope: Consumed 2.432s CPU time, 15.7M memory peak. Apr 24 00:41:59.401093 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit. Apr 24 00:41:59.425562 systemd-logind[1558]: Removed session 12. Apr 24 00:42:04.208016 containerd[1570]: time="2026-04-24T00:42:04.201298952Z" level=warning msg="container event discarded" container=afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd type=CONTAINER_CREATED_EVENT Apr 24 00:42:04.322263 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:35308.service - OpenSSH per-connection server daemon (10.0.0.1:35308). 
Apr 24 00:42:04.702678 sshd[5931]: Accepted publickey for core from 10.0.0.1 port 35308 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo Apr 24 00:42:04.711790 sshd-session[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:42:04.723615 containerd[1570]: time="2026-04-24T00:42:04.723115735Z" level=warning msg="container event discarded" container=afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd type=CONTAINER_STARTED_EVENT Apr 24 00:42:04.782497 systemd-logind[1558]: New session 13 of user core. Apr 24 00:42:04.865549 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 00:42:06.256434 kubelet[2842]: E0424 00:42:06.252140 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:42:06.410824 sshd[5934]: Connection closed by 10.0.0.1 port 35308 Apr 24 00:42:06.425200 sshd-session[5931]: pam_unix(sshd:session): session closed for user core Apr 24 00:42:06.470311 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:35308.service: Deactivated successfully. Apr 24 00:42:06.525567 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 00:42:06.527354 systemd[1]: session-13.scope: Consumed 1.077s CPU time, 15.8M memory peak. Apr 24 00:42:06.597568 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit. Apr 24 00:42:06.599769 systemd-logind[1558]: Removed session 13. 
Apr 24 00:42:06.971249 kubelet[2842]: I0424 00:42:06.971125 2842 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-7fb6cdc5d9-bxjp2" podStartSLOduration=183.142181069 podStartE2EDuration="4m47.971107693s" podCreationTimestamp="2026-04-24 00:37:19 +0000 UTC" firstStartedPulling="2026-04-24 00:39:49.352268114 +0000 UTC m=+180.739738572" lastFinishedPulling="2026-04-24 00:41:34.181194729 +0000 UTC m=+285.568665196" observedRunningTime="2026-04-24 00:41:34.947453391 +0000 UTC m=+286.334923981" watchObservedRunningTime="2026-04-24 00:42:06.971107693 +0000 UTC m=+318.358578159" Apr 24 00:42:10.280801 kubelet[2842]: E0424 00:42:10.276760 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.005s" Apr 24 00:42:11.589735 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:45590.service - OpenSSH per-connection server daemon (10.0.0.1:45590). Apr 24 00:42:12.253262 sshd[5976]: Accepted publickey for core from 10.0.0.1 port 45590 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo Apr 24 00:42:12.255167 sshd-session[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:42:12.293107 systemd-logind[1558]: New session 14 of user core. Apr 24 00:42:12.302037 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 00:42:13.405532 sshd[5979]: Connection closed by 10.0.0.1 port 45590 Apr 24 00:42:13.420710 sshd-session[5976]: pam_unix(sshd:session): session closed for user core Apr 24 00:42:13.610674 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:45590.service: Deactivated successfully. Apr 24 00:42:13.642804 systemd[1]: session-14.scope: Deactivated successfully. Apr 24 00:42:13.661158 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit. Apr 24 00:42:13.665078 systemd-logind[1558]: Removed session 14. 
Apr 24 00:42:15.137972 kubelet[2842]: E0424 00:42:15.137769 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:42:15.137972 kubelet[2842]: E0424 00:42:15.138031 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:42:18.541014 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:51564.service - OpenSSH per-connection server daemon (10.0.0.1:51564). Apr 24 00:42:19.977071 sshd[5993]: Accepted publickey for core from 10.0.0.1 port 51564 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo Apr 24 00:42:19.982778 sshd-session[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:42:20.247239 systemd-logind[1558]: New session 15 of user core. Apr 24 00:42:20.278541 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 24 00:42:23.785142 sshd[5996]: Connection closed by 10.0.0.1 port 51564 Apr 24 00:42:23.798602 sshd-session[5993]: pam_unix(sshd:session): session closed for user core Apr 24 00:42:23.877656 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:51564.service: Deactivated successfully. Apr 24 00:42:23.949215 systemd[1]: session-15.scope: Deactivated successfully. Apr 24 00:42:23.949603 systemd[1]: session-15.scope: Consumed 2.238s CPU time, 14.1M memory peak. 
Apr 24 00:42:24.013842 containerd[1570]: time="2026-04-24T00:42:24.009235351Z" level=warning msg="container event discarded" container=07b48ba5c45cba73c5e85d185e53b3435d44008796c7c8ec3f29ccda0aee5f89 type=CONTAINER_CREATED_EVENT Apr 24 00:42:24.013842 containerd[1570]: time="2026-04-24T00:42:24.013945717Z" level=warning msg="container event discarded" container=07b48ba5c45cba73c5e85d185e53b3435d44008796c7c8ec3f29ccda0aee5f89 type=CONTAINER_STARTED_EVENT Apr 24 00:42:24.073354 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit. Apr 24 00:42:24.077065 systemd-logind[1558]: Removed session 15. Apr 24 00:42:24.199132 kubelet[2842]: E0424 00:42:24.198661 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:42:24.945782 containerd[1570]: time="2026-04-24T00:42:24.922648695Z" level=warning msg="container event discarded" container=a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac type=CONTAINER_CREATED_EVENT Apr 24 00:42:24.954538 containerd[1570]: time="2026-04-24T00:42:24.953998791Z" level=warning msg="container event discarded" container=a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac type=CONTAINER_STARTED_EVENT Apr 24 00:42:28.114110 containerd[1570]: time="2026-04-24T00:42:28.112201607Z" level=warning msg="container event discarded" container=6786e0081de57d9a8fc399f338264b2054c11fd8f394c437ccd9fc41edcf76ed type=CONTAINER_CREATED_EVENT Apr 24 00:42:28.304205 containerd[1570]: time="2026-04-24T00:42:28.302465345Z" level=warning msg="container event discarded" container=6786e0081de57d9a8fc399f338264b2054c11fd8f394c437ccd9fc41edcf76ed type=CONTAINER_STARTED_EVENT Apr 24 00:42:28.950410 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:38178.service - OpenSSH per-connection server daemon (10.0.0.1:38178). 
Apr 24 00:42:29.887287 sshd[6052]: Accepted publickey for core from 10.0.0.1 port 38178 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo Apr 24 00:42:29.894777 sshd-session[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:42:30.080382 containerd[1570]: time="2026-04-24T00:42:30.067993798Z" level=warning msg="container event discarded" container=a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385 type=CONTAINER_CREATED_EVENT Apr 24 00:42:30.103168 systemd-logind[1558]: New session 16 of user core. Apr 24 00:42:30.120388 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 24 00:42:30.390940 containerd[1570]: time="2026-04-24T00:42:30.390496346Z" level=warning msg="container event discarded" container=a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385 type=CONTAINER_STARTED_EVENT Apr 24 00:42:30.783489 containerd[1570]: time="2026-04-24T00:42:30.776447345Z" level=warning msg="container event discarded" container=a8200e6bcfa148b3ef2539f113364c31e06af6d366cf6ecacb03cff640fb6385 type=CONTAINER_STOPPED_EVENT Apr 24 00:42:31.528028 sshd[6056]: Connection closed by 10.0.0.1 port 38178 Apr 24 00:42:31.527812 sshd-session[6052]: pam_unix(sshd:session): session closed for user core Apr 24 00:42:31.588329 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:38178.service: Deactivated successfully. Apr 24 00:42:31.607594 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 00:42:31.608607 systemd[1]: session-16.scope: Consumed 1.210s CPU time, 17.5M memory peak. Apr 24 00:42:31.619802 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit. Apr 24 00:42:31.675954 systemd-logind[1558]: Removed session 16. 
Apr 24 00:42:35.411145 kubelet[2842]: E0424 00:42:35.410995 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:42:36.855400 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:57152.service - OpenSSH per-connection server daemon (10.0.0.1:57152). Apr 24 00:42:38.065687 sshd[6105]: Accepted publickey for core from 10.0.0.1 port 57152 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo Apr 24 00:42:38.093424 sshd-session[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:42:38.239248 kubelet[2842]: E0424 00:42:38.237379 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.092s" Apr 24 00:42:38.449494 systemd-logind[1558]: New session 17 of user core. Apr 24 00:42:38.455915 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 00:42:40.598110 sshd[6126]: Connection closed by 10.0.0.1 port 57152 Apr 24 00:42:40.610525 sshd-session[6105]: pam_unix(sshd:session): session closed for user core Apr 24 00:42:40.680805 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:57152.service: Deactivated successfully. Apr 24 00:42:40.727152 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 00:42:40.727511 systemd[1]: session-17.scope: Consumed 1.149s CPU time, 16.8M memory peak. Apr 24 00:42:40.889206 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit. Apr 24 00:42:41.044729 systemd-logind[1558]: Removed session 17. Apr 24 00:42:41.642497 kubelet[2842]: E0424 00:42:41.641811 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:42:45.710160 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:46050.service - OpenSSH per-connection server daemon (10.0.0.1:46050). 
Apr 24 00:42:46.127007 sshd[6156]: Accepted publickey for core from 10.0.0.1 port 46050 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo Apr 24 00:42:46.148993 sshd-session[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:42:46.347668 systemd-logind[1558]: New session 18 of user core. Apr 24 00:42:46.376174 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 24 00:42:48.526178 sshd[6159]: Connection closed by 10.0.0.1 port 46050 Apr 24 00:42:48.575991 sshd-session[6156]: pam_unix(sshd:session): session closed for user core Apr 24 00:42:48.784475 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:46050.service: Deactivated successfully. Apr 24 00:42:48.961054 systemd[1]: session-18.scope: Deactivated successfully. Apr 24 00:42:48.981485 systemd[1]: session-18.scope: Consumed 1.416s CPU time, 15.1M memory peak. Apr 24 00:42:49.088332 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit. Apr 24 00:42:49.253171 systemd-logind[1558]: Removed session 18. Apr 24 00:42:50.486077 kubelet[2842]: E0424 00:42:50.485685 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.166s" Apr 24 00:42:50.501315 kubelet[2842]: E0424 00:42:50.500997 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:42:54.142365 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:46054.service - OpenSSH per-connection server daemon (10.0.0.1:46054). Apr 24 00:42:57.408297 sshd[6219]: Accepted publickey for core from 10.0.0.1 port 46054 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo Apr 24 00:42:57.504523 sshd-session[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:42:58.282392 systemd-logind[1558]: New session 19 of user core. 
Apr 24 00:42:58.336076 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 24 00:42:59.439656 containerd[1570]: time="2026-04-24T00:42:59.437117842Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Apr 24 00:43:00.083059 kubelet[2842]: E0424 00:43:00.064818 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-live"] Apr 24 00:43:00.195942 kubelet[2842]: E0424 00:43:00.194347 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.054s" Apr 24 00:43:03.521832 kubelet[2842]: E0424 00:43:03.520962 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.327s" Apr 24 00:43:06.143468 sshd[6224]: Connection closed by 10.0.0.1 port 46054 Apr 24 00:43:06.166428 sshd-session[6219]: pam_unix(sshd:session): session closed for user core Apr 24 00:43:06.390742 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:46054.service: Deactivated successfully. Apr 24 00:43:06.394798 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:46054.service: Consumed 1.311s CPU time, 3.2M memory peak. Apr 24 00:43:06.591308 systemd[1]: session-19.scope: Deactivated successfully. Apr 24 00:43:06.595206 systemd[1]: session-19.scope: Consumed 5.492s CPU time, 14.7M memory peak. Apr 24 00:43:06.726518 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit. Apr 24 00:43:06.872470 systemd-logind[1558]: Removed session 19. 
Apr 24 00:43:08.698716 containerd[1570]: time="2026-04-24T00:43:08.692831855Z" level=warning msg="container event discarded" container=e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7 type=CONTAINER_CREATED_EVENT Apr 24 00:43:09.415060 containerd[1570]: time="2026-04-24T00:43:09.399710913Z" level=warning msg="container event discarded" container=e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7 type=CONTAINER_STARTED_EVENT Apr 24 00:43:10.640796 kubelet[2842]: E0424 00:43:10.598478 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.064s" Apr 24 00:43:11.420087 containerd[1570]: time="2026-04-24T00:43:11.387783084Z" level=warning msg="container event discarded" container=e14bf95c66a9021fa96a55316a15156e7262c4bee2b8aa9981dc8b91b0b256d7 type=CONTAINER_STOPPED_EVENT Apr 24 00:43:11.709276 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:47550.service - OpenSSH per-connection server daemon (10.0.0.1:47550). Apr 24 00:43:12.742353 kubelet[2842]: E0424 00:43:12.733540 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.946s" Apr 24 00:43:13.813052 sshd[6296]: Accepted publickey for core from 10.0.0.1 port 47550 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo Apr 24 00:43:13.994353 sshd-session[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:43:14.967487 systemd-logind[1558]: New session 20 of user core. Apr 24 00:43:14.990630 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 24 00:43:19.583290 containerd[1570]: time="2026-04-24T00:43:19.548628044Z" level=error msg="ExecSync for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 24 00:43:20.254176 kubelet[2842]: E0424 00:43:20.187389 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 24 00:43:21.464502 kubelet[2842]: E0424 00:43:21.464362 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.71s" Apr 24 00:43:22.615222 sshd[6299]: Connection closed by 10.0.0.1 port 47550 Apr 24 00:43:22.645166 sshd-session[6296]: pam_unix(sshd:session): session closed for user core Apr 24 00:43:23.178968 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:47550.service: Deactivated successfully. Apr 24 00:43:23.194349 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:47550.service: Consumed 1.022s CPU time, 3.2M memory peak. Apr 24 00:43:23.413211 systemd[1]: session-20.scope: Deactivated successfully. Apr 24 00:43:23.428324 systemd[1]: session-20.scope: Consumed 3.864s CPU time, 14.8M memory peak. Apr 24 00:43:23.518468 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit. Apr 24 00:43:23.600409 systemd-logind[1558]: Removed session 20. 
Apr 24 00:43:25.173069 kubelet[2842]: E0424 00:43:25.161986 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.444s"
Apr 24 00:43:25.458457 kubelet[2842]: E0424 00:43:25.457396 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:43:25.668171 kubelet[2842]: E0424 00:43:25.650013 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:43:27.887510 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:36516.service - OpenSSH per-connection server daemon (10.0.0.1:36516).
Apr 24 00:43:29.708110 sshd[6321]: Accepted publickey for core from 10.0.0.1 port 36516 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:43:29.890692 sshd-session[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:43:30.707222 systemd-logind[1558]: New session 21 of user core.
Apr 24 00:43:30.730947 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 24 00:43:34.616592 containerd[1570]: time="2026-04-24T00:43:34.368766584Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded"
Apr 24 00:43:36.686168 containerd[1570]: time="2026-04-24T00:43:36.676055852Z" level=warning msg="container event discarded" container=6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb type=CONTAINER_CREATED_EVENT
Apr 24 00:43:37.556009 containerd[1570]: time="2026-04-24T00:43:37.552422521Z" level=warning msg="container event discarded" container=6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb type=CONTAINER_STARTED_EVENT
Apr 24 00:43:38.113334 sshd[6325]: Connection closed by 10.0.0.1 port 36516
Apr 24 00:43:38.124589 sshd-session[6321]: pam_unix(sshd:session): session closed for user core
Apr 24 00:43:38.488415 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:36516.service: Deactivated successfully.
Apr 24 00:43:38.550145 systemd[1]: session-21.scope: Deactivated successfully.
Apr 24 00:43:38.550440 systemd[1]: session-21.scope: Consumed 4.665s CPU time, 14.4M memory peak.
Apr 24 00:43:38.581389 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit.
Apr 24 00:43:38.797223 kubelet[2842]: E0424 00:43:38.791999 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-ready"]
Apr 24 00:43:38.805469 systemd-logind[1558]: Removed session 21.
Apr 24 00:43:39.602028 kubelet[2842]: E0424 00:43:39.089831 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.918s"
Apr 24 00:43:43.629651 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:54006.service - OpenSSH per-connection server daemon (10.0.0.1:54006).
Apr 24 00:43:43.760074 kubelet[2842]: E0424 00:43:43.753550 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:43:44.196428 kubelet[2842]: E0424 00:43:44.183500 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:43:45.648072 containerd[1570]: time="2026-04-24T00:43:45.540424046Z" level=warning msg="container event discarded" container=6aa9d7eb7b3a698433a1d238cb642909ffa81ff1e18798400920675af0614dcb type=CONTAINER_STOPPED_EVENT
Apr 24 00:43:46.621002 sshd[6364]: Accepted publickey for core from 10.0.0.1 port 54006 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:43:46.633447 sshd-session[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:43:47.441001 systemd-logind[1558]: New session 22 of user core.
Apr 24 00:43:47.444846 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 24 00:43:49.393056 containerd[1570]: time="2026-04-24T00:43:49.390281530Z" level=warning msg="container event discarded" container=79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e type=CONTAINER_CREATED_EVENT
Apr 24 00:43:52.203483 sshd[6379]: Connection closed by 10.0.0.1 port 54006
Apr 24 00:43:52.301542 sshd-session[6364]: pam_unix(sshd:session): session closed for user core
Apr 24 00:43:52.399309 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:54006.service: Deactivated successfully.
Apr 24 00:43:52.505216 systemd[1]: session-22.scope: Deactivated successfully.
Apr 24 00:43:52.511106 systemd[1]: session-22.scope: Consumed 3.123s CPU time, 15.8M memory peak.
Apr 24 00:43:52.563833 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit.
Apr 24 00:43:52.670560 systemd-logind[1558]: Removed session 22.
Apr 24 00:43:52.795517 containerd[1570]: time="2026-04-24T00:43:52.792561341Z" level=warning msg="container event discarded" container=79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e type=CONTAINER_STARTED_EVENT
Apr 24 00:43:55.520487 systemd[1]: cri-containerd-db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f.scope: Deactivated successfully.
Apr 24 00:43:55.523354 systemd[1]: cri-containerd-db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f.scope: Consumed 12.522s CPU time, 139.5M memory peak, 66.2M read from disk.
Apr 24 00:43:55.840414 containerd[1570]: time="2026-04-24T00:43:55.840090516Z" level=info msg="received container exit event container_id:\"db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f\" id:\"db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f\" pid:5327 exit_status:1 exited_at:{seconds:1776991435 nanos:654439496}"
Apr 24 00:43:57.687826 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:52040.service - OpenSSH per-connection server daemon (10.0.0.1:52040).
Apr 24 00:43:59.541734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f-rootfs.mount: Deactivated successfully.
Apr 24 00:44:00.446489 kubelet[2842]: E0424 00:44:00.445681 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="20.818s"
Apr 24 00:44:00.498724 sshd[6467]: Accepted publickey for core from 10.0.0.1 port 52040 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:44:00.560026 sshd-session[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:44:01.018697 systemd-logind[1558]: New session 23 of user core.
Apr 24 00:44:01.118525 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 24 00:44:03.209117 kubelet[2842]: I0424 00:44:03.208397 2842 scope.go:122] "RemoveContainer" containerID="afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd"
Apr 24 00:44:04.871990 kubelet[2842]: E0424 00:44:04.869179 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:44:05.476271 kubelet[2842]: E0424 00:44:05.475941 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:44:05.869961 containerd[1570]: time="2026-04-24T00:44:05.865248566Z" level=info msg="RemoveContainer for \"afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd\""
Apr 24 00:44:05.914129 sshd[6479]: Connection closed by 10.0.0.1 port 52040
Apr 24 00:44:05.998463 sshd-session[6467]: pam_unix(sshd:session): session closed for user core
Apr 24 00:44:06.162304 kubelet[2842]: E0424 00:44:06.155438 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.391s"
Apr 24 00:44:06.279471 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:52040.service: Deactivated successfully.
Apr 24 00:44:06.427285 systemd[1]: session-23.scope: Deactivated successfully.
Apr 24 00:44:06.428752 systemd[1]: session-23.scope: Consumed 3.218s CPU time, 18.4M memory peak.
Apr 24 00:44:06.462282 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit.
Apr 24 00:44:06.590304 systemd-logind[1558]: Removed session 23.
Apr 24 00:44:06.908128 containerd[1570]: time="2026-04-24T00:44:06.904616423Z" level=info msg="RemoveContainer for \"afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd\" returns successfully"
Apr 24 00:44:07.145328 containerd[1570]: time="2026-04-24T00:44:07.144446210Z" level=error msg="ExecSync for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 24 00:44:07.164002 containerd[1570]: time="2026-04-24T00:44:07.161365535Z" level=error msg="ContainerStatus for \"afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd\": not found"
Apr 24 00:44:07.260429 kubelet[2842]: E0424 00:44:07.260186 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 24 00:44:07.306766 kubelet[2842]: E0424 00:44:07.259334 2842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd\": not found" containerID="afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd"
Apr 24 00:44:07.846715 kubelet[2842]: E0424 00:44:07.846623 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.691s"
Apr 24 00:44:08.289665 kubelet[2842]: I0424 00:44:08.289392 2842 scope.go:122] "RemoveContainer" containerID="db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f"
Apr 24 00:44:08.291823 kubelet[2842]: E0424 00:44:08.291238 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:44:09.335106 containerd[1570]: time="2026-04-24T00:44:09.330768226Z" level=info msg="CreateContainer within sandbox \"29c9b8e1e5ae9689ab8fdb416a7a8bf111eb06fcf801e46ef65e57a22323c201\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}"
Apr 24 00:44:09.625715 containerd[1570]: time="2026-04-24T00:44:09.616682149Z" level=info msg="Container e3457ef12f723e3fb3f77a54e5294e5306d7f99a671c6660887326a756ab9ea8: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:44:10.044408 kubelet[2842]: E0424 00:44:10.041228 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:44:10.087358 containerd[1570]: time="2026-04-24T00:44:10.087254088Z" level=info msg="CreateContainer within sandbox \"29c9b8e1e5ae9689ab8fdb416a7a8bf111eb06fcf801e46ef65e57a22323c201\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"e3457ef12f723e3fb3f77a54e5294e5306d7f99a671c6660887326a756ab9ea8\""
Apr 24 00:44:10.265143 kubelet[2842]: E0424 00:44:10.265050 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.91s"
Apr 24 00:44:10.486539 containerd[1570]: time="2026-04-24T00:44:10.464176718Z" level=info msg="StartContainer for \"e3457ef12f723e3fb3f77a54e5294e5306d7f99a671c6660887326a756ab9ea8\""
Apr 24 00:44:10.615418 containerd[1570]: time="2026-04-24T00:44:10.615245035Z" level=info msg="connecting to shim e3457ef12f723e3fb3f77a54e5294e5306d7f99a671c6660887326a756ab9ea8" address="unix:///run/containerd/s/95e659de8a4b4f412a261fa2fba5d6d18d60080905262246c92bd6465c1c712d" protocol=ttrpc version=3
Apr 24 00:44:11.199242 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:38590.service - OpenSSH per-connection server daemon (10.0.0.1:38590).
Apr 24 00:44:12.710675 containerd[1570]: time="2026-04-24T00:44:12.697314085Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded"
Apr 24 00:44:12.829036 kubelet[2842]: E0424 00:44:12.804470 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-live"]
Apr 24 00:44:13.681102 sshd[6517]: Accepted publickey for core from 10.0.0.1 port 38590 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:44:13.909569 sshd-session[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:44:14.382080 systemd-logind[1558]: New session 24 of user core.
Apr 24 00:44:14.432293 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 24 00:44:17.178565 systemd[1]: Started cri-containerd-e3457ef12f723e3fb3f77a54e5294e5306d7f99a671c6660887326a756ab9ea8.scope - libcontainer container e3457ef12f723e3fb3f77a54e5294e5306d7f99a671c6660887326a756ab9ea8.
Apr 24 00:44:19.634003 containerd[1570]: time="2026-04-24T00:44:19.557469735Z" level=warning msg="container event discarded" container=0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816 type=CONTAINER_CREATED_EVENT
Apr 24 00:44:19.669356 containerd[1570]: time="2026-04-24T00:44:19.650133740Z" level=warning msg="container event discarded" container=0a630e874c2001957391dcb003893f0a78e82944e4b12a33d55c9509bbdaf816 type=CONTAINER_STARTED_EVENT
Apr 24 00:44:19.776050 containerd[1570]: time="2026-04-24T00:44:19.654353405Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded"
Apr 24 00:44:21.159817 containerd[1570]: time="2026-04-24T00:44:21.121450044Z" level=warning msg="container event discarded" container=ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2 type=CONTAINER_CREATED_EVENT
Apr 24 00:44:21.289905 containerd[1570]: time="2026-04-24T00:44:21.289118732Z" level=warning msg="container event discarded" container=ea5d0830eee7029669aba864af165d7ce996792dc6c62097819effa7d77df3d2 type=CONTAINER_STARTED_EVENT
Apr 24 00:44:21.688801 sshd[6535]: Connection closed by 10.0.0.1 port 38590
Apr 24 00:44:21.699773 sshd-session[6517]: pam_unix(sshd:session): session closed for user core
Apr 24 00:44:21.944249 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:38590.service: Deactivated successfully.
Apr 24 00:44:22.173237 systemd[1]: session-24.scope: Deactivated successfully.
Apr 24 00:44:22.181775 systemd[1]: session-24.scope: Consumed 3.813s CPU time, 15.3M memory peak.
Apr 24 00:44:22.288947 systemd-logind[1558]: Session 24 logged out. Waiting for processes to exit.
Apr 24 00:44:22.305274 systemd-logind[1558]: Removed session 24.
Apr 24 00:44:23.179072 kubelet[2842]: E0424 00:44:23.165421 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-ready"]
Apr 24 00:44:23.346659 containerd[1570]: time="2026-04-24T00:44:23.345515023Z" level=warning msg="container event discarded" container=03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653 type=CONTAINER_CREATED_EVENT
Apr 24 00:44:23.355562 containerd[1570]: time="2026-04-24T00:44:23.346780680Z" level=warning msg="container event discarded" container=03193224a8984aff89fb25b57ea4afc53f2e0486d6504980faaba3c782348653 type=CONTAINER_STARTED_EVENT
Apr 24 00:44:24.743301 kubelet[2842]: E0424 00:44:24.743071 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:44:26.137240 containerd[1570]: time="2026-04-24T00:44:26.115757478Z" level=warning msg="container event discarded" container=7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6 type=CONTAINER_CREATED_EVENT
Apr 24 00:44:26.153761 containerd[1570]: time="2026-04-24T00:44:26.137653957Z" level=warning msg="container event discarded" container=7d7c4196e35647127e88d0ab36f7b19afc59bf18727ba01c92073e5a07085cb6 type=CONTAINER_STARTED_EVENT
Apr 24 00:44:26.851486 containerd[1570]: time="2026-04-24T00:44:26.849378402Z" level=warning msg="container event discarded" container=64127005e1a800842e606bf98326dc86a71aefa6574f1ae679d41a5938ca3616 type=CONTAINER_CREATED_EVENT
Apr 24 00:44:26.987210 systemd[1]: Started sshd@24-10.0.0.92:22-10.0.0.1:33488.service - OpenSSH per-connection server daemon (10.0.0.1:33488).
Apr 24 00:44:27.994184 sshd[6579]: Accepted publickey for core from 10.0.0.1 port 33488 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:44:27.996843 sshd-session[6579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:44:28.163953 containerd[1570]: time="2026-04-24T00:44:28.116547469Z" level=error msg="get state for e3457ef12f723e3fb3f77a54e5294e5306d7f99a671c6660887326a756ab9ea8" error="context deadline exceeded"
Apr 24 00:44:28.198335 containerd[1570]: time="2026-04-24T00:44:28.182757250Z" level=warning msg="unknown status" status=0
Apr 24 00:44:28.602196 systemd-logind[1558]: New session 25 of user core.
Apr 24 00:44:28.925746 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 24 00:44:29.353086 containerd[1570]: time="2026-04-24T00:44:29.344009381Z" level=warning msg="container event discarded" container=f5b1aaeaf96515053f35ff83f402a18feea7a9d4c5f4765703983b5891fdba61 type=CONTAINER_CREATED_EVENT
Apr 24 00:44:30.461731 kubelet[2842]: E0424 00:44:30.397388 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="19.261s"
Apr 24 00:44:31.444789 containerd[1570]: time="2026-04-24T00:44:31.444401386Z" level=error msg="get state for e3457ef12f723e3fb3f77a54e5294e5306d7f99a671c6660887326a756ab9ea8" error="context deadline exceeded"
Apr 24 00:44:31.444789 containerd[1570]: time="2026-04-24T00:44:31.444624568Z" level=warning msg="unknown status" status=0
Apr 24 00:44:31.691934 kubelet[2842]: E0424 00:44:31.689614 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:44:31.977692 containerd[1570]: time="2026-04-24T00:44:31.976079452Z" level=warning msg="container event discarded" container=64127005e1a800842e606bf98326dc86a71aefa6574f1ae679d41a5938ca3616 type=CONTAINER_STARTED_EVENT
Apr 24 00:44:32.706187 sshd[6584]: Connection closed by 10.0.0.1 port 33488
Apr 24 00:44:32.733390 sshd-session[6579]: pam_unix(sshd:session): session closed for user core
Apr 24 00:44:33.101022 systemd[1]: sshd@24-10.0.0.92:22-10.0.0.1:33488.service: Deactivated successfully.
Apr 24 00:44:33.309525 systemd[1]: session-25.scope: Deactivated successfully.
Apr 24 00:44:33.317328 systemd[1]: session-25.scope: Consumed 2.194s CPU time, 16.4M memory peak.
Apr 24 00:44:33.384695 systemd-logind[1558]: Session 25 logged out. Waiting for processes to exit.
Apr 24 00:44:33.515757 systemd-logind[1558]: Removed session 25.
Apr 24 00:44:33.923110 containerd[1570]: time="2026-04-24T00:44:33.894736485Z" level=error msg="get state for e3457ef12f723e3fb3f77a54e5294e5306d7f99a671c6660887326a756ab9ea8" error="context deadline exceeded"
Apr 24 00:44:33.923110 containerd[1570]: time="2026-04-24T00:44:33.918060057Z" level=warning msg="unknown status" status=0
Apr 24 00:44:34.602206 containerd[1570]: time="2026-04-24T00:44:34.599509997Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 24 00:44:34.611205 containerd[1570]: time="2026-04-24T00:44:34.609099730Z" level=error msg="ttrpc: received message on inactive stream" stream=7
Apr 24 00:44:34.673278 containerd[1570]: time="2026-04-24T00:44:34.619073367Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 24 00:44:36.597381 containerd[1570]: time="2026-04-24T00:44:36.509813824Z" level=warning msg="container event discarded" container=29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144 type=CONTAINER_CREATED_EVENT
Apr 24 00:44:36.613208 containerd[1570]: time="2026-04-24T00:44:36.604191606Z" level=warning msg="container event discarded" container=29e7230b41c4df653cba2ea5b25a5059f1b4a6feba001a71bd9fd845fa9f7144 type=CONTAINER_STARTED_EVENT
Apr 24 00:44:37.461201 containerd[1570]: time="2026-04-24T00:44:37.460140802Z" level=info msg="StartContainer for \"e3457ef12f723e3fb3f77a54e5294e5306d7f99a671c6660887326a756ab9ea8\" returns successfully"
Apr 24 00:44:37.709180 kubelet[2842]: E0424 00:44:37.653379 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.169s"
Apr 24 00:44:38.098526 systemd[1]: Started sshd@25-10.0.0.92:22-10.0.0.1:50622.service - OpenSSH per-connection server daemon (10.0.0.1:50622).
Apr 24 00:44:38.466831 kubelet[2842]: E0424 00:44:38.464987 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:44:39.830903 sshd[6605]: Accepted publickey for core from 10.0.0.1 port 50622 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:44:39.878479 sshd-session[6605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:44:40.559315 containerd[1570]: time="2026-04-24T00:44:40.499533353Z" level=warning msg="container event discarded" container=f5b1aaeaf96515053f35ff83f402a18feea7a9d4c5f4765703983b5891fdba61 type=CONTAINER_STARTED_EVENT
Apr 24 00:44:40.585691 systemd-logind[1558]: New session 26 of user core.
Apr 24 00:44:40.609104 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 24 00:44:42.111049 kubelet[2842]: E0424 00:44:42.109719 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.36s"
Apr 24 00:44:43.656382 sshd[6614]: Connection closed by 10.0.0.1 port 50622
Apr 24 00:44:43.665242 sshd-session[6605]: pam_unix(sshd:session): session closed for user core
Apr 24 00:44:43.872760 systemd[1]: Started sshd@26-10.0.0.92:22-10.0.0.1:50624.service - OpenSSH per-connection server daemon (10.0.0.1:50624).
Apr 24 00:44:43.878717 systemd[1]: sshd@25-10.0.0.92:22-10.0.0.1:50622.service: Deactivated successfully.
Apr 24 00:44:43.901505 systemd[1]: session-26.scope: Deactivated successfully.
Apr 24 00:44:43.905710 systemd[1]: session-26.scope: Consumed 1.702s CPU time, 13.7M memory peak.
Apr 24 00:44:43.971020 systemd-logind[1558]: Session 26 logged out. Waiting for processes to exit.
Apr 24 00:44:43.976760 systemd-logind[1558]: Removed session 26.
Apr 24 00:44:44.056125 containerd[1570]: time="2026-04-24T00:44:44.052528260Z" level=warning msg="container event discarded" container=1426e5c6f4a2f9fcd83e51251f8798f6fbf768c172b799a3284c464a06729bec type=CONTAINER_CREATED_EVENT
Apr 24 00:44:44.207665 kubelet[2842]: E0424 00:44:44.203911 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2s"
Apr 24 00:44:45.304130 sshd[6628]: Accepted publickey for core from 10.0.0.1 port 50624 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:44:45.372745 sshd-session[6628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:44:45.618948 containerd[1570]: time="2026-04-24T00:44:45.595561397Z" level=warning msg="container event discarded" container=efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14 type=CONTAINER_CREATED_EVENT
Apr 24 00:44:45.630542 containerd[1570]: time="2026-04-24T00:44:45.616461546Z" level=warning msg="container event discarded" container=efaa6bd6701f5b687509b76e15e34df370add75df98df9426121572f5db14b14 type=CONTAINER_STARTED_EVENT
Apr 24 00:44:45.707945 containerd[1570]: time="2026-04-24T00:44:45.707021109Z" level=warning msg="container event discarded" container=1426e5c6f4a2f9fcd83e51251f8798f6fbf768c172b799a3284c464a06729bec type=CONTAINER_STARTED_EVENT
Apr 24 00:44:45.711359 systemd-logind[1558]: New session 27 of user core.
Apr 24 00:44:45.716262 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 24 00:44:46.280196 kubelet[2842]: E0424 00:44:46.278571 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.073s"
Apr 24 00:44:47.919931 kubelet[2842]: E0424 00:44:47.913156 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:44:49.312223 containerd[1570]: time="2026-04-24T00:44:49.305968179Z" level=warning msg="container event discarded" container=8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2 type=CONTAINER_CREATED_EVENT
Apr 24 00:44:49.328324 containerd[1570]: time="2026-04-24T00:44:49.316625900Z" level=warning msg="container event discarded" container=8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2 type=CONTAINER_STARTED_EVENT
Apr 24 00:44:51.642485 kubelet[2842]: E0424 00:44:51.642165 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:44:53.977377 containerd[1570]: time="2026-04-24T00:44:53.975443937Z" level=warning msg="container event discarded" container=87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c type=CONTAINER_CREATED_EVENT
Apr 24 00:44:54.015223 containerd[1570]: time="2026-04-24T00:44:54.007968667Z" level=warning msg="container event discarded" container=87f5aaef1782770458b23c5409352e866cd15043842e49d2080114ae70a4e77c type=CONTAINER_STARTED_EVENT
Apr 24 00:44:54.340236 sshd[6636]: Connection closed by 10.0.0.1 port 50624
Apr 24 00:44:54.371484 sshd-session[6628]: pam_unix(sshd:session): session closed for user core
Apr 24 00:44:54.513385 systemd[1]: Started sshd@27-10.0.0.92:22-10.0.0.1:43056.service - OpenSSH per-connection server daemon (10.0.0.1:43056).
Apr 24 00:44:54.821474 systemd[1]: sshd@26-10.0.0.92:22-10.0.0.1:50624.service: Deactivated successfully.
Apr 24 00:44:54.988660 systemd[1]: session-27.scope: Deactivated successfully.
Apr 24 00:44:54.990307 systemd[1]: session-27.scope: Consumed 2.942s CPU time, 24.9M memory peak.
Apr 24 00:44:55.389073 systemd-logind[1558]: Session 27 logged out. Waiting for processes to exit.
Apr 24 00:44:55.575310 systemd-logind[1558]: Removed session 27.
Apr 24 00:44:57.455721 containerd[1570]: time="2026-04-24T00:44:57.455508263Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded"
Apr 24 00:44:57.604104 kubelet[2842]: E0424 00:44:57.604008 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-ready"]
Apr 24 00:44:57.902069 sshd[6671]: Accepted publickey for core from 10.0.0.1 port 43056 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:44:58.099323 sshd-session[6671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:44:58.347523 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 24 00:44:58.351930 systemd-logind[1558]: New session 28 of user core.
Apr 24 00:45:00.156113 containerd[1570]: time="2026-04-24T00:45:00.153406727Z" level=warning msg="container event discarded" container=6fb8e2210a6459ba7959800891d5c52cdc8cf7019aeec4daa39e21ca19d25a0a type=CONTAINER_CREATED_EVENT
Apr 24 00:45:04.912715 containerd[1570]: time="2026-04-24T00:45:04.901735367Z" level=warning msg="container event discarded" container=6fb8e2210a6459ba7959800891d5c52cdc8cf7019aeec4daa39e21ca19d25a0a type=CONTAINER_STARTED_EVENT
Apr 24 00:45:06.468156 sshd[6679]: Connection closed by 10.0.0.1 port 43056
Apr 24 00:45:06.476711 sshd-session[6671]: pam_unix(sshd:session): session closed for user core
Apr 24 00:45:06.960093 systemd[1]: sshd@27-10.0.0.92:22-10.0.0.1:43056.service: Deactivated successfully.
Apr 24 00:45:06.961498 systemd[1]: sshd@27-10.0.0.92:22-10.0.0.1:43056.service: Consumed 1.131s CPU time, 3.2M memory peak.
Apr 24 00:45:07.062899 systemd[1]: session-28.scope: Deactivated successfully.
Apr 24 00:45:07.065769 systemd[1]: session-28.scope: Consumed 4.090s CPU time, 15.6M memory peak.
Apr 24 00:45:07.173677 systemd-logind[1558]: Session 28 logged out. Waiting for processes to exit.
Apr 24 00:45:07.275752 systemd-logind[1558]: Removed session 28.
Apr 24 00:45:08.867755 kubelet[2842]: E0424 00:45:08.867431 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.803s"
Apr 24 00:45:10.272914 kubelet[2842]: E0424 00:45:10.271829 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.383s"
Apr 24 00:45:11.741165 systemd[1]: Started sshd@28-10.0.0.92:22-10.0.0.1:57722.service - OpenSSH per-connection server daemon (10.0.0.1:57722).
Apr 24 00:45:12.230771 containerd[1570]: time="2026-04-24T00:45:12.230615129Z" level=error msg="ExecSync for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 24 00:45:12.274220 kubelet[2842]: E0424 00:45:12.256358 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 24 00:45:13.443425 sshd[6729]: Accepted publickey for core from 10.0.0.1 port 57722 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:45:13.664737 sshd-session[6729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:45:13.861101 kubelet[2842]: E0424 00:45:13.861065 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.735s"
Apr 24 00:45:13.917803 systemd-logind[1558]: New session 29 of user core.
Apr 24 00:45:13.972377 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 24 00:45:15.426334 sshd[6746]: Connection closed by 10.0.0.1 port 57722
Apr 24 00:45:15.429183 sshd-session[6729]: pam_unix(sshd:session): session closed for user core
Apr 24 00:45:15.536472 systemd[1]: sshd@28-10.0.0.92:22-10.0.0.1:57722.service: Deactivated successfully.
Apr 24 00:45:15.540231 systemd[1]: session-29.scope: Deactivated successfully.
Apr 24 00:45:15.542034 systemd-logind[1558]: Session 29 logged out. Waiting for processes to exit.
Apr 24 00:45:15.562842 systemd-logind[1558]: Removed session 29.
Apr 24 00:45:16.189088 kubelet[2842]: I0424 00:45:16.188245 2842 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 24 00:45:17.901620 kubelet[2842]: E0424 00:45:17.885808 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:45:19.458206 kubelet[2842]: E0424 00:45:19.458172 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.225s"
Apr 24 00:45:19.589338 kubelet[2842]: E0424 00:45:19.589159 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:45:20.824469 systemd[1]: Started sshd@29-10.0.0.92:22-10.0.0.1:47354.service - OpenSSH per-connection server daemon (10.0.0.1:47354).
Apr 24 00:45:21.584573 sshd[6820]: Accepted publickey for core from 10.0.0.1 port 47354 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:45:21.643085 sshd-session[6820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:45:21.828158 systemd-logind[1558]: New session 30 of user core.
Apr 24 00:45:21.860270 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 24 00:45:22.914614 containerd[1570]: time="2026-04-24T00:45:22.914131753Z" level=info msg="StopContainer for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" with timeout 2 (s)"
Apr 24 00:45:23.142017 containerd[1570]: time="2026-04-24T00:45:23.141773441Z" level=info msg="Stop container \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" with signal terminated"
Apr 24 00:45:23.379812 kubelet[2842]: E0424 00:45:23.379724 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.08s"
Apr 24 00:45:24.262929 sshd[6823]: Connection closed by 10.0.0.1 port 47354
Apr 24 00:45:24.267763 sshd-session[6820]: pam_unix(sshd:session): session closed for user core
Apr 24 00:45:24.475730 systemd[1]: sshd@29-10.0.0.92:22-10.0.0.1:47354.service: Deactivated successfully.
Apr 24 00:45:24.592216 kubelet[2842]: E0424 00:45:24.591229 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.172s"
Apr 24 00:45:24.610399 systemd[1]: session-30.scope: Deactivated successfully.
Apr 24 00:45:24.611005 systemd[1]: session-30.scope: Consumed 1.630s CPU time, 15.6M memory peak.
Apr 24 00:45:24.726650 systemd-logind[1558]: Session 30 logged out. Waiting for processes to exit.
Apr 24 00:45:24.728085 systemd-logind[1558]: Removed session 30.
Apr 24 00:45:25.855125 systemd[1]: cri-containerd-79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e.scope: Deactivated successfully.
Apr 24 00:45:25.859504 systemd[1]: cri-containerd-79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e.scope: Consumed 1min 5.843s CPU time, 355M memory peak, 79.6M read from disk, 1.1M written to disk.
Apr 24 00:45:26.410699 containerd[1570]: time="2026-04-24T00:45:26.409957131Z" level=error msg="ExecSync for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"d14edd5fc9cce0a897129e3d63f80bf175425075c2bd7011d35032e6f795108d\": OCI runtime exec failed: exec failed: unable to start container process: procReady not received"
Apr 24 00:45:26.630380 containerd[1570]: time="2026-04-24T00:45:26.626635172Z" level=info msg="received container exit event container_id:\"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" id:\"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" pid:3972 exited_at:{seconds:1776991526 nanos:592063314}"
Apr 24 00:45:26.830646 kubelet[2842]: E0424 00:45:26.724458 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"d14edd5fc9cce0a897129e3d63f80bf175425075c2bd7011d35032e6f795108d\": OCI runtime exec failed: exec failed: unable to start container process: procReady not received" containerID="79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e" cmd=["/bin/calico-node","-shutdown"]
Apr 24 00:45:27.279366 containerd[1570]: time="2026-04-24T00:45:27.276648071Z" level=info msg="Kill container \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\""
Apr 24 00:45:27.334501 kubelet[2842]: E0424 00:45:26.915561 2842 kuberuntime_container.go:772] "PreStop hook failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"d14edd5fc9cce0a897129e3d63f80bf175425075c2bd7011d35032e6f795108d\": OCI runtime exec failed: exec failed: unable to start container process: procReady not received" pod="calico-system/calico-node-249n5" podUID="d1b23bf3-e49e-4c90-a9d7-196b32f35107" containerName="calico-node" containerID="containerd://79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e"
Apr 24 00:45:27.392160 kubelet[2842]: E0424 00:45:27.391931 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.262s"
Apr 24 00:45:28.103844 containerd[1570]: time="2026-04-24T00:45:28.103677227Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded"
Apr 24 00:45:28.811805 kubelet[2842]: E0424 00:45:28.809707 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-live"]
Apr 24 00:45:29.604329 systemd[1]: Started sshd@30-10.0.0.92:22-10.0.0.1:43494.service - OpenSSH per-connection server daemon (10.0.0.1:43494).
Apr 24 00:45:31.778536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e-rootfs.mount: Deactivated successfully.
Apr 24 00:45:32.013535 containerd[1570]: time="2026-04-24T00:45:32.010664557Z" level=warning msg="container event discarded" container=14052426ef7bd34e2436d7ffec352135fbca6618b6969de16f0a2923d573528d type=CONTAINER_STOPPED_EVENT
Apr 24 00:45:32.150745 containerd[1570]: time="2026-04-24T00:45:32.150611766Z" level=error msg="ExecSync for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"699772113e675da344891fa04661ef46310982f1dbe13defa8b8d8eb5745f4b9\": cannot exec in a deleted state"
Apr 24 00:45:32.176153 kubelet[2842]: E0424 00:45:32.175240 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"699772113e675da344891fa04661ef46310982f1dbe13defa8b8d8eb5745f4b9\": cannot exec in a deleted state" containerID="79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 24 00:45:32.516262 sshd[6896]: Accepted publickey for core from 10.0.0.1 port 43494 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:45:32.659728 sshd-session[6896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:45:32.666192 containerd[1570]: time="2026-04-24T00:45:32.664483700Z" level=error msg="ExecSync for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 24 00:45:32.770890 systemd-logind[1558]: New session 31 of user core.
Apr 24 00:45:32.772174 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 24 00:45:33.305489 containerd[1570]: time="2026-04-24T00:45:33.302366162Z" level=info msg="StopContainer for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" returns successfully"
Apr 24 00:45:33.875924 kubelet[2842]: E0424 00:45:33.875024 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 24 00:45:34.186087 containerd[1570]: time="2026-04-24T00:45:34.170487008Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded"
Apr 24 00:45:34.367344 containerd[1570]: time="2026-04-24T00:45:34.367162406Z" level=error msg="ExecSync for \"79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 24 00:45:34.378965 kubelet[2842]: E0424 00:45:34.376294 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="79483206fff5b0662fd7861a49e1ef90c9af55558dca666dd07be9864d378a6e" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 24 00:45:34.379454 kubelet[2842]: E0424 00:45:34.379136 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-ready"]
Apr 24 00:45:36.951133 kubelet[2842]: E0424 00:45:36.945460 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.349s"
Apr 24 00:45:40.224115 sshd[6902]: Connection closed by 10.0.0.1 port 43494
Apr 24 00:45:40.300064 sshd-session[6896]: pam_unix(sshd:session): session closed for user core
Apr 24 00:45:40.332109 containerd[1570]: time="2026-04-24T00:45:40.331626297Z" level=info msg="StopContainer for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" with timeout 30 (s)"
Apr 24 00:45:40.378561 containerd[1570]: time="2026-04-24T00:45:40.374368579Z" level=info msg="Stop container \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" with signal terminated"
Apr 24 00:45:40.561580 systemd[1]: sshd@30-10.0.0.92:22-10.0.0.1:43494.service: Deactivated successfully.
Apr 24 00:45:40.568783 systemd[1]: sshd@30-10.0.0.92:22-10.0.0.1:43494.service: Consumed 1.026s CPU time, 3.2M memory peak.
Apr 24 00:45:40.791421 systemd[1]: session-31.scope: Deactivated successfully.
Apr 24 00:45:40.816410 systemd[1]: session-31.scope: Consumed 5.224s CPU time, 15.9M memory peak.
Apr 24 00:45:40.963362 systemd-logind[1558]: Session 31 logged out. Waiting for processes to exit.
Apr 24 00:45:41.016122 systemd-logind[1558]: Removed session 31.
Apr 24 00:45:41.275378 systemd[1]: cri-containerd-9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993.scope: Deactivated successfully.
Apr 24 00:45:41.278370 systemd[1]: cri-containerd-9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993.scope: Consumed 8.347s CPU time, 19.1M memory peak, 4K read from disk.
Apr 24 00:45:41.753896 containerd[1570]: time="2026-04-24T00:45:41.753435509Z" level=info msg="received container exit event container_id:\"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" id:\"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" pid:5669 exit_status:2 exited_at:{seconds:1776991541 nanos:673477392}"
Apr 24 00:45:42.609019 containerd[1570]: time="2026-04-24T00:45:42.601576551Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"d04b843130c31ac6d42070013ef54ffa966d233e73cade920154ca4fc9420766\": OCI runtime exec failed: exec failed: unable to start container process: procReady not received"
Apr 24 00:45:43.305723 containerd[1570]: time="2026-04-24T00:45:43.295648628Z" level=info msg="CreateContainer within sandbox \"a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}"
Apr 24 00:45:43.373429 kubelet[2842]: E0424 00:45:43.362325 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"d04b843130c31ac6d42070013ef54ffa966d233e73cade920154ca4fc9420766\": OCI runtime exec failed: exec failed: unable to start container process: procReady not received" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-ready"]
Apr 24 00:45:45.992053 systemd[1]: Started sshd@31-10.0.0.92:22-10.0.0.1:49262.service - OpenSSH per-connection server daemon (10.0.0.1:49262).
Apr 24 00:45:46.357248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822457789.mount: Deactivated successfully.
Apr 24 00:45:46.463930 containerd[1570]: time="2026-04-24T00:45:46.417760333Z" level=info msg="Container 893eeb074eb4277a4ddcb7bb05cf508423e04ca6267dcdf4fe37f31f6eb617e8: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:45:48.309002 kubelet[2842]: E0424 00:45:48.058007 2842 cadvisor_stats_provider.go:569] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0e7964a_87e1_45b9_8f9b_26c7b6d887fe.slice/cri-containerd-9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993.scope\": RecentStats: unable to find data in memory cache]"
Apr 24 00:45:48.564090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993-rootfs.mount: Deactivated successfully.
Apr 24 00:45:48.706161 sshd[6956]: Accepted publickey for core from 10.0.0.1 port 49262 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:45:48.821665 sshd-session[6956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:45:49.030432 containerd[1570]: time="2026-04-24T00:45:48.957848204Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"cb06a1fbec86e0c28de8092d2fdab0a610321717a563aaf83fa33b2e489f9ae0\": cannot exec in a deleted state"
Apr 24 00:45:49.063963 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 24 00:45:49.098718 systemd-logind[1558]: New session 32 of user core.
Apr 24 00:45:49.110104 containerd[1570]: time="2026-04-24T00:45:49.110069984Z" level=info msg="StopContainer for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" returns successfully"
Apr 24 00:45:49.199624 kubelet[2842]: E0424 00:45:49.199447 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"cb06a1fbec86e0c28de8092d2fdab0a610321717a563aaf83fa33b2e489f9ae0\": cannot exec in a deleted state" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-ready"]
Apr 24 00:45:49.220528 containerd[1570]: time="2026-04-24T00:45:49.220398406Z" level=info msg="CreateContainer within sandbox \"a79ef8a20cf4b9d64d8f4a00591735eea4d661b0f355c073f3f40888a78decac\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"893eeb074eb4277a4ddcb7bb05cf508423e04ca6267dcdf4fe37f31f6eb617e8\""
Apr 24 00:45:49.531944 containerd[1570]: time="2026-04-24T00:45:49.513535907Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 24 00:45:49.531944 containerd[1570]: time="2026-04-24T00:45:49.518583532Z" level=info msg="StartContainer for \"893eeb074eb4277a4ddcb7bb05cf508423e04ca6267dcdf4fe37f31f6eb617e8\""
Apr 24 00:45:49.539069 containerd[1570]: time="2026-04-24T00:45:49.539014169Z" level=info msg="connecting to shim 893eeb074eb4277a4ddcb7bb05cf508423e04ca6267dcdf4fe37f31f6eb617e8" address="unix:///run/containerd/s/773931e115e40acf27e471bcb9a64cf410c12b7911761257b06c8ede2160ad5f" protocol=ttrpc version=3
Apr 24 00:45:49.816703 kubelet[2842]: E0424 00:45:49.801523 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-ready"]
Apr 24 00:45:49.969942 containerd[1570]: time="2026-04-24T00:45:49.968036816Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 24 00:45:50.020819 systemd[1]: Started cri-containerd-893eeb074eb4277a4ddcb7bb05cf508423e04ca6267dcdf4fe37f31f6eb617e8.scope - libcontainer container 893eeb074eb4277a4ddcb7bb05cf508423e04ca6267dcdf4fe37f31f6eb617e8.
Apr 24 00:45:50.097960 kubelet[2842]: E0424 00:45:50.090414 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.358s"
Apr 24 00:45:50.172385 kubelet[2842]: E0424 00:45:50.171934 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-ready"]
Apr 24 00:45:50.407239 containerd[1570]: time="2026-04-24T00:45:50.406083931Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 24 00:45:50.436113 sshd[6966]: Connection closed by 10.0.0.1 port 49262
Apr 24 00:45:50.438549 sshd-session[6956]: pam_unix(sshd:session): session closed for user core
Apr 24 00:45:50.440683 containerd[1570]: time="2026-04-24T00:45:50.440369369Z" level=info msg="CreateContainer within sandbox \"8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2\" for container &ContainerMetadata{Name:goldmane,Attempt:1,}"
Apr 24 00:45:50.456959 kubelet[2842]: E0424 00:45:50.446934 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-ready"]
Apr 24 00:45:50.465496 systemd[1]: sshd@31-10.0.0.92:22-10.0.0.1:49262.service: Deactivated successfully.
Apr 24 00:45:50.493624 systemd[1]: session-32.scope: Deactivated successfully.
Apr 24 00:45:50.520349 systemd-logind[1558]: Session 32 logged out. Waiting for processes to exit.
Apr 24 00:45:50.522291 systemd-logind[1558]: Removed session 32.
Apr 24 00:45:50.664474 containerd[1570]: time="2026-04-24T00:45:50.663236892Z" level=error msg="ExecSync for \"9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 24 00:45:50.703477 kubelet[2842]: E0424 00:45:50.703175 2842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993" cmd=["/health","-ready"]
Apr 24 00:45:50.705788 containerd[1570]: time="2026-04-24T00:45:50.705695122Z" level=info msg="Container 3f7d17535be71ac347820cdbe1310f7f76ca154bfa39bdedb1021103ad0320cb: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:45:50.705951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923433234.mount: Deactivated successfully.
Apr 24 00:45:50.859957 kubelet[2842]: E0424 00:45:50.858766 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:45:51.102608 containerd[1570]: time="2026-04-24T00:45:51.100349942Z" level=info msg="CreateContainer within sandbox \"8de1c0301930a8668e16f8b646ed7a3a23b7a9958639d67da91031bd506f42c2\" for &ContainerMetadata{Name:goldmane,Attempt:1,} returns container id \"3f7d17535be71ac347820cdbe1310f7f76ca154bfa39bdedb1021103ad0320cb\""
Apr 24 00:45:51.145946 containerd[1570]: time="2026-04-24T00:45:51.145881186Z" level=info msg="StartContainer for \"3f7d17535be71ac347820cdbe1310f7f76ca154bfa39bdedb1021103ad0320cb\""
Apr 24 00:45:51.169240 containerd[1570]: time="2026-04-24T00:45:51.169207011Z" level=info msg="connecting to shim 3f7d17535be71ac347820cdbe1310f7f76ca154bfa39bdedb1021103ad0320cb" address="unix:///run/containerd/s/116cc232891da72d0df252d7a65cd2764b2f216cde16229cab029f9f44e2e21e" protocol=ttrpc version=3
Apr 24 00:45:51.957335 systemd[1]: Started cri-containerd-3f7d17535be71ac347820cdbe1310f7f76ca154bfa39bdedb1021103ad0320cb.scope - libcontainer container 3f7d17535be71ac347820cdbe1310f7f76ca154bfa39bdedb1021103ad0320cb.
Apr 24 00:45:52.706931 containerd[1570]: time="2026-04-24T00:45:52.706655978Z" level=error msg="get state for 893eeb074eb4277a4ddcb7bb05cf508423e04ca6267dcdf4fe37f31f6eb617e8" error="context deadline exceeded"
Apr 24 00:45:52.719467 containerd[1570]: time="2026-04-24T00:45:52.707919594Z" level=warning msg="unknown status" status=0
Apr 24 00:45:53.042646 containerd[1570]: time="2026-04-24T00:45:53.014710879Z" level=warning msg="container event discarded" container=afd8a7336733204c73b5384b9963514a50464a96f1e4a32de182f1be578cbacd type=CONTAINER_STOPPED_EVENT
Apr 24 00:45:53.323423 containerd[1570]: time="2026-04-24T00:45:53.320454353Z" level=warning msg="container event discarded" container=8ab5c2c21bd65b272afb5021b290cfae1d49d9412aad943194b5b0ef44cdfc17 type=CONTAINER_CREATED_EVENT
Apr 24 00:45:54.118944 containerd[1570]: time="2026-04-24T00:45:54.118412366Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 24 00:45:54.861002 containerd[1570]: time="2026-04-24T00:45:54.857832993Z" level=info msg="StartContainer for \"893eeb074eb4277a4ddcb7bb05cf508423e04ca6267dcdf4fe37f31f6eb617e8\" returns successfully"
Apr 24 00:45:55.663362 systemd[1]: Started sshd@32-10.0.0.92:22-10.0.0.1:52048.service - OpenSSH per-connection server daemon (10.0.0.1:52048).
Apr 24 00:45:55.909790 containerd[1570]: time="2026-04-24T00:45:55.909716379Z" level=info msg="StartContainer for \"3f7d17535be71ac347820cdbe1310f7f76ca154bfa39bdedb1021103ad0320cb\" returns successfully"
Apr 24 00:45:56.446072 sshd[7091]: Accepted publickey for core from 10.0.0.1 port 52048 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:45:56.452897 sshd-session[7091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:45:56.488548 systemd-logind[1558]: New session 33 of user core.
Apr 24 00:45:56.496291 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 24 00:45:57.461266 sshd[7118]: Connection closed by 10.0.0.1 port 52048
Apr 24 00:45:57.468780 sshd-session[7091]: pam_unix(sshd:session): session closed for user core
Apr 24 00:45:57.635037 systemd[1]: sshd@32-10.0.0.92:22-10.0.0.1:52048.service: Deactivated successfully.
Apr 24 00:45:57.689477 systemd[1]: session-33.scope: Deactivated successfully.
Apr 24 00:45:57.815157 systemd-logind[1558]: Session 33 logged out. Waiting for processes to exit.
Apr 24 00:45:57.844951 systemd-logind[1558]: Removed session 33.
Apr 24 00:45:58.939103 kubelet[2842]: I0424 00:45:58.938032 2842 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 24 00:45:59.188278 containerd[1570]: time="2026-04-24T00:45:59.186415616Z" level=warning msg="container event discarded" container=898b2fc2ab71b4e44911424d8679e70343cbd53bdee14866f5810ff9b9011671 type=CONTAINER_CREATED_EVENT
Apr 24 00:46:00.131500 kubelet[2842]: E0424 00:46:00.130990 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:46:02.142477 kubelet[2842]: E0424 00:46:02.142112 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:46:02.545771 systemd[1]: Started sshd@33-10.0.0.92:22-10.0.0.1:52058.service - OpenSSH per-connection server daemon (10.0.0.1:52058).
Apr 24 00:46:02.760818 sshd[7223]: Accepted publickey for core from 10.0.0.1 port 52058 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:46:02.769742 sshd-session[7223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:46:02.831449 systemd-logind[1558]: New session 34 of user core.
Apr 24 00:46:02.902441 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 24 00:46:03.000492 containerd[1570]: time="2026-04-24T00:46:02.999029974Z" level=warning msg="container event discarded" container=1e1ef1d7c1adeaa4c1f4e99b102ff658be05f1b0b5e24e046dfec134d51a305b type=CONTAINER_CREATED_EVENT
Apr 24 00:46:03.688197 sshd[7228]: Connection closed by 10.0.0.1 port 52058
Apr 24 00:46:03.681960 sshd-session[7223]: pam_unix(sshd:session): session closed for user core
Apr 24 00:46:03.723031 systemd[1]: sshd@33-10.0.0.92:22-10.0.0.1:52058.service: Deactivated successfully.
Apr 24 00:46:03.730699 systemd[1]: session-34.scope: Deactivated successfully.
Apr 24 00:46:03.742398 systemd-logind[1558]: Session 34 logged out. Waiting for processes to exit.
Apr 24 00:46:03.751427 systemd[1]: Started sshd@34-10.0.0.92:22-10.0.0.1:52066.service - OpenSSH per-connection server daemon (10.0.0.1:52066).
Apr 24 00:46:03.759486 systemd-logind[1558]: Removed session 34.
Apr 24 00:46:03.985185 sshd[7247]: Accepted publickey for core from 10.0.0.1 port 52066 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:46:04.003784 sshd-session[7247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:46:04.120274 systemd-logind[1558]: New session 35 of user core.
Apr 24 00:46:04.127742 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 24 00:46:05.654022 sshd[7255]: Connection closed by 10.0.0.1 port 52066
Apr 24 00:46:05.656568 sshd-session[7247]: pam_unix(sshd:session): session closed for user core
Apr 24 00:46:05.712413 systemd[1]: Started sshd@35-10.0.0.92:22-10.0.0.1:51858.service - OpenSSH per-connection server daemon (10.0.0.1:51858).
Apr 24 00:46:05.713354 systemd[1]: sshd@34-10.0.0.92:22-10.0.0.1:52066.service: Deactivated successfully.
Apr 24 00:46:05.806651 systemd[1]: session-35.scope: Deactivated successfully.
Apr 24 00:46:05.840294 systemd-logind[1558]: Session 35 logged out. Waiting for processes to exit.
Apr 24 00:46:05.845146 systemd-logind[1558]: Removed session 35.
Apr 24 00:46:06.214420 sshd[7370]: Accepted publickey for core from 10.0.0.1 port 51858 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:46:06.213555 sshd-session[7370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:46:06.262347 systemd-logind[1558]: New session 36 of user core.
Apr 24 00:46:06.278060 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 24 00:46:07.178502 kubelet[2842]: E0424 00:46:07.178358 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:46:13.078389 sshd[7379]: Connection closed by 10.0.0.1 port 51858
Apr 24 00:46:13.091627 sshd-session[7370]: pam_unix(sshd:session): session closed for user core
Apr 24 00:46:13.187466 systemd[1]: Started sshd@36-10.0.0.92:22-10.0.0.1:51868.service - OpenSSH per-connection server daemon (10.0.0.1:51868).
Apr 24 00:46:13.214925 systemd[1]: sshd@35-10.0.0.92:22-10.0.0.1:51858.service: Deactivated successfully.
Apr 24 00:46:13.221343 systemd[1]: session-36.scope: Deactivated successfully.
Apr 24 00:46:13.227029 kubelet[2842]: E0424 00:46:13.223641 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:46:13.226768 systemd[1]: session-36.scope: Consumed 4.359s CPU time, 47.9M memory peak.
Apr 24 00:46:13.231729 systemd-logind[1558]: Session 36 logged out. Waiting for processes to exit.
Apr 24 00:46:13.253727 systemd-logind[1558]: Removed session 36.
Apr 24 00:46:13.482235 sshd[7472]: Accepted publickey for core from 10.0.0.1 port 51868 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:46:13.499427 sshd-session[7472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:46:13.517287 systemd-logind[1558]: New session 37 of user core.
Apr 24 00:46:13.528189 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 24 00:46:14.120829 containerd[1570]: time="2026-04-24T00:46:14.120528909Z" level=warning msg="container event discarded" container=898b2fc2ab71b4e44911424d8679e70343cbd53bdee14866f5810ff9b9011671 type=CONTAINER_STARTED_EVENT
Apr 24 00:46:16.153409 sshd[7499]: Connection closed by 10.0.0.1 port 51868
Apr 24 00:46:16.163335 sshd-session[7472]: pam_unix(sshd:session): session closed for user core
Apr 24 00:46:16.206390 systemd[1]: Started sshd@37-10.0.0.92:22-10.0.0.1:52316.service - OpenSSH per-connection server daemon (10.0.0.1:52316).
Apr 24 00:46:16.370846 systemd[1]: sshd@36-10.0.0.92:22-10.0.0.1:51868.service: Deactivated successfully.
Apr 24 00:46:16.524174 systemd[1]: session-37.scope: Deactivated successfully.
Apr 24 00:46:16.524801 systemd[1]: session-37.scope: Consumed 1.795s CPU time, 61.8M memory peak.
Apr 24 00:46:16.541567 systemd-logind[1558]: Session 37 logged out. Waiting for processes to exit.
Apr 24 00:46:16.543706 systemd-logind[1558]: Removed session 37.
Apr 24 00:46:16.785504 sshd[7532]: Accepted publickey for core from 10.0.0.1 port 52316 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:46:16.791824 sshd-session[7532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:46:17.136361 systemd-logind[1558]: New session 38 of user core.
Apr 24 00:46:17.141932 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 24 00:46:18.128170 sshd[7559]: Connection closed by 10.0.0.1 port 52316
Apr 24 00:46:18.129211 sshd-session[7532]: pam_unix(sshd:session): session closed for user core
Apr 24 00:46:18.220710 systemd[1]: sshd@37-10.0.0.92:22-10.0.0.1:52316.service: Deactivated successfully.
Apr 24 00:46:18.253939 systemd[1]: session-38.scope: Deactivated successfully.
Apr 24 00:46:18.272175 systemd-logind[1558]: Session 38 logged out. Waiting for processes to exit.
Apr 24 00:46:18.288760 systemd-logind[1558]: Removed session 38.
Apr 24 00:46:18.379593 containerd[1570]: time="2026-04-24T00:46:18.328242573Z" level=warning msg="container event discarded" container=8ab5c2c21bd65b272afb5021b290cfae1d49d9412aad943194b5b0ef44cdfc17 type=CONTAINER_STARTED_EVENT
Apr 24 00:46:18.614322 containerd[1570]: time="2026-04-24T00:46:18.613657234Z" level=warning msg="container event discarded" container=db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f type=CONTAINER_CREATED_EVENT
Apr 24 00:46:19.197231 containerd[1570]: time="2026-04-24T00:46:19.195144674Z" level=warning msg="container event discarded" container=1e1ef1d7c1adeaa4c1f4e99b102ff658be05f1b0b5e24e046dfec134d51a305b type=CONTAINER_STARTED_EVENT
Apr 24 00:46:19.414420 containerd[1570]: time="2026-04-24T00:46:19.413586242Z" level=warning msg="container event discarded" container=db2660cce8fa7e9fd74a8a7fe0363a1a7b43cd304461fd89f6769de958ab647f type=CONTAINER_STARTED_EVENT
Apr 24 00:46:21.086641 containerd[1570]: time="2026-04-24T00:46:21.086241480Z" level=warning msg="container event discarded" container=330365919c62e511110acaa5315c185eee5386a8217b5965f0b328a3abdbdcc6 type=CONTAINER_CREATED_EVENT
Apr 24 00:46:21.704390 containerd[1570]: time="2026-04-24T00:46:21.703903917Z" level=warning msg="container event discarded" container=330365919c62e511110acaa5315c185eee5386a8217b5965f0b328a3abdbdcc6 type=CONTAINER_STARTED_EVENT
Apr 24 00:46:23.283920 systemd[1]: Started sshd@38-10.0.0.92:22-10.0.0.1:52322.service - OpenSSH per-connection server daemon (10.0.0.1:52322).
Apr 24 00:46:23.654974 sshd[7580]: Accepted publickey for core from 10.0.0.1 port 52322 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:46:23.659687 sshd-session[7580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:46:23.743166 systemd-logind[1558]: New session 39 of user core.
Apr 24 00:46:23.777788 systemd[1]: Started session-39.scope - Session 39 of User core.
Apr 24 00:46:24.482139 sshd[7586]: Connection closed by 10.0.0.1 port 52322
Apr 24 00:46:24.482375 sshd-session[7580]: pam_unix(sshd:session): session closed for user core
Apr 24 00:46:24.499148 systemd[1]: sshd@38-10.0.0.92:22-10.0.0.1:52322.service: Deactivated successfully.
Apr 24 00:46:24.501929 systemd[1]: session-39.scope: Deactivated successfully.
Apr 24 00:46:24.503485 systemd-logind[1558]: Session 39 logged out. Waiting for processes to exit.
Apr 24 00:46:24.514450 systemd-logind[1558]: Removed session 39.
Apr 24 00:46:26.397921 containerd[1570]: time="2026-04-24T00:46:26.395598741Z" level=warning msg="container event discarded" container=2f2c86439c798d2e231edd4758681a02d72424710b4e10e16d179a362add54cd type=CONTAINER_CREATED_EVENT
Apr 24 00:46:26.629976 containerd[1570]: time="2026-04-24T00:46:26.629476484Z" level=warning msg="container event discarded" container=2f2c86439c798d2e231edd4758681a02d72424710b4e10e16d179a362add54cd type=CONTAINER_STARTED_EVENT
Apr 24 00:46:29.532692 systemd[1]: Started sshd@39-10.0.0.92:22-10.0.0.1:38638.service - OpenSSH per-connection server daemon (10.0.0.1:38638).
Apr 24 00:46:29.627613 containerd[1570]: time="2026-04-24T00:46:29.617467791Z" level=warning msg="container event discarded" container=9963862dfa28a126bbcd6a0471690adedad45356154537ce05ad8e55089bc26f type=CONTAINER_CREATED_EVENT
Apr 24 00:46:30.023443 containerd[1570]: time="2026-04-24T00:46:30.023016601Z" level=warning msg="container event discarded" container=9963862dfa28a126bbcd6a0471690adedad45356154537ce05ad8e55089bc26f type=CONTAINER_STARTED_EVENT
Apr 24 00:46:30.139708 sshd[7609]: Accepted publickey for core from 10.0.0.1 port 38638 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:46:30.194371 sshd-session[7609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:46:30.236660 systemd-logind[1558]: New session 40 of user core.
Apr 24 00:46:30.247372 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 24 00:46:31.790217 sshd[7612]: Connection closed by 10.0.0.1 port 38638
Apr 24 00:46:31.791081 sshd-session[7609]: pam_unix(sshd:session): session closed for user core
Apr 24 00:46:31.806591 systemd[1]: sshd@39-10.0.0.92:22-10.0.0.1:38638.service: Deactivated successfully.
Apr 24 00:46:31.812694 systemd[1]: session-40.scope: Deactivated successfully.
Apr 24 00:46:31.814795 systemd[1]: session-40.scope: Consumed 1.143s CPU time, 15.9M memory peak.
Apr 24 00:46:31.829235 systemd-logind[1558]: Session 40 logged out. Waiting for processes to exit.
Apr 24 00:46:31.858378 systemd-logind[1558]: Removed session 40.
Apr 24 00:46:34.301907 containerd[1570]: time="2026-04-24T00:46:34.298727432Z" level=warning msg="container event discarded" container=9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993 type=CONTAINER_CREATED_EVENT
Apr 24 00:46:34.506126 containerd[1570]: time="2026-04-24T00:46:34.500825066Z" level=warning msg="container event discarded" container=9704381b909ba36f22edffe0f7cee26472048513fb9464c96d0f4ec22e120993 type=CONTAINER_STARTED_EVENT
Apr 24 00:46:37.078049 systemd[1]: Started sshd@40-10.0.0.92:22-10.0.0.1:35628.service - OpenSSH per-connection server daemon (10.0.0.1:35628).
Apr 24 00:46:37.250133 sshd[7680]: Accepted publickey for core from 10.0.0.1 port 35628 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:46:37.303688 sshd-session[7680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:46:37.434119 systemd-logind[1558]: New session 41 of user core.
Apr 24 00:46:37.443701 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 24 00:46:37.942103 sshd[7683]: Connection closed by 10.0.0.1 port 35628
Apr 24 00:46:37.939444 sshd-session[7680]: pam_unix(sshd:session): session closed for user core
Apr 24 00:46:37.955906 systemd[1]: sshd@40-10.0.0.92:22-10.0.0.1:35628.service: Deactivated successfully.
Apr 24 00:46:37.964503 systemd[1]: session-41.scope: Deactivated successfully.
Apr 24 00:46:37.976727 systemd-logind[1558]: Session 41 logged out. Waiting for processes to exit.
Apr 24 00:46:37.978096 systemd-logind[1558]: Removed session 41.
Apr 24 00:46:39.155788 kubelet[2842]: E0424 00:46:39.140502 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:46:41.152565 kubelet[2842]: E0424 00:46:41.152435 2842 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:46:43.009037 systemd[1]: Started sshd@41-10.0.0.92:22-10.0.0.1:35638.service - OpenSSH per-connection server daemon (10.0.0.1:35638).
Apr 24 00:46:43.116915 sshd[7700]: Accepted publickey for core from 10.0.0.1 port 35638 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:46:43.119485 sshd-session[7700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:46:43.147764 systemd-logind[1558]: New session 42 of user core.
Apr 24 00:46:43.163903 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 24 00:46:43.519338 sshd[7703]: Connection closed by 10.0.0.1 port 35638
Apr 24 00:46:43.521150 sshd-session[7700]: pam_unix(sshd:session): session closed for user core
Apr 24 00:46:43.530363 systemd-logind[1558]: Session 42 logged out. Waiting for processes to exit.
Apr 24 00:46:43.530657 systemd[1]: sshd@41-10.0.0.92:22-10.0.0.1:35638.service: Deactivated successfully.
Apr 24 00:46:43.533011 systemd[1]: session-42.scope: Deactivated successfully.
Apr 24 00:46:43.561086 systemd-logind[1558]: Removed session 42.
Apr 24 00:46:48.613524 systemd[1]: Started sshd@42-10.0.0.92:22-10.0.0.1:44292.service - OpenSSH per-connection server daemon (10.0.0.1:44292).
Apr 24 00:46:48.967304 sshd[7762]: Accepted publickey for core from 10.0.0.1 port 44292 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:46:48.971456 sshd-session[7762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:46:49.125909 systemd-logind[1558]: New session 43 of user core.
Apr 24 00:46:49.129508 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 24 00:46:51.576273 sshd[7766]: Connection closed by 10.0.0.1 port 44292
Apr 24 00:46:51.576834 sshd-session[7762]: pam_unix(sshd:session): session closed for user core
Apr 24 00:46:51.697773 systemd[1]: sshd@42-10.0.0.92:22-10.0.0.1:44292.service: Deactivated successfully.
Apr 24 00:46:51.727075 systemd[1]: session-43.scope: Deactivated successfully.
Apr 24 00:46:51.730805 systemd[1]: session-43.scope: Consumed 1.654s CPU time, 15.8M memory peak.
Apr 24 00:46:51.749319 systemd-logind[1558]: Session 43 logged out. Waiting for processes to exit.
Apr 24 00:46:51.793517 systemd-logind[1558]: Removed session 43.
Apr 24 00:46:56.586174 kubelet[2842]: E0424 00:46:56.576479 2842 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.409s"
Apr 24 00:46:56.733388 systemd[1]: Started sshd@43-10.0.0.92:22-10.0.0.1:46678.service - OpenSSH per-connection server daemon (10.0.0.1:46678).
Apr 24 00:46:57.829161 sshd[7816]: Accepted publickey for core from 10.0.0.1 port 46678 ssh2: RSA SHA256:Bstpeg3yJTAnHDjwqmyCH/jbWSgAymTPI24PW5aVUSo
Apr 24 00:46:57.834093 sshd-session[7816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:46:57.937572 systemd-logind[1558]: New session 44 of user core.
Apr 24 00:46:57.945771 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 24 00:46:58.950969 sshd[7819]: Connection closed by 10.0.0.1 port 46678
Apr 24 00:46:58.956840 sshd-session[7816]: pam_unix(sshd:session): session closed for user core
Apr 24 00:46:58.978433 systemd[1]: sshd@43-10.0.0.92:22-10.0.0.1:46678.service: Deactivated successfully.
Apr 24 00:46:58.986581 systemd[1]: session-44.scope: Deactivated successfully.
Apr 24 00:46:59.002532 systemd-logind[1558]: Session 44 logged out. Waiting for processes to exit.
Apr 24 00:46:59.010252 systemd-logind[1558]: Removed session 44.