Nov 6 05:25:44.480916 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Nov 6 03:32:51 -00 2025 Nov 6 05:25:44.480953 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=42c7eeb79a8ee89597bba4204806137326be9acdbca65a8fd923766f65b62f69 Nov 6 05:25:44.480965 kernel: BIOS-provided physical RAM map: Nov 6 05:25:44.480978 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 6 05:25:44.480987 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Nov 6 05:25:44.480996 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Nov 6 05:25:44.481007 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Nov 6 05:25:44.481016 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Nov 6 05:25:44.481029 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Nov 6 05:25:44.481038 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Nov 6 05:25:44.481047 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Nov 6 05:25:44.481057 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Nov 6 05:25:44.481069 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Nov 6 05:25:44.481078 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Nov 6 05:25:44.481089 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Nov 6 05:25:44.481099 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Nov 6 05:25:44.481113 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Nov 6 05:25:44.481125 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Nov 6 05:25:44.481145 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Nov 6 05:25:44.481155 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Nov 6 05:25:44.481165 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Nov 6 05:25:44.481175 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Nov 6 05:25:44.481184 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 6 05:25:44.481194 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 6 05:25:44.481204 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Nov 6 05:25:44.481214 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 6 05:25:44.481224 kernel: NX (Execute Disable) protection: active Nov 6 05:25:44.481234 kernel: APIC: Static calls initialized Nov 6 05:25:44.481243 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Nov 6 05:25:44.481257 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Nov 6 05:25:44.481266 kernel: extended physical RAM map: Nov 6 05:25:44.481276 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 6 05:25:44.481286 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Nov 6 05:25:44.481296 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Nov 6 05:25:44.481306 kernel: reserve setup_data: [mem 
0x0000000000808000-0x000000000080afff] usable Nov 6 05:25:44.481316 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Nov 6 05:25:44.481326 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Nov 6 05:25:44.481336 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Nov 6 05:25:44.481346 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Nov 6 05:25:44.481356 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Nov 6 05:25:44.481373 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Nov 6 05:25:44.481383 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Nov 6 05:25:44.481394 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Nov 6 05:25:44.481404 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Nov 6 05:25:44.481417 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Nov 6 05:25:44.481427 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Nov 6 05:25:44.481437 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Nov 6 05:25:44.481448 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Nov 6 05:25:44.481458 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Nov 6 05:25:44.481469 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Nov 6 05:25:44.481479 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Nov 6 05:25:44.481489 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Nov 6 05:25:44.481499 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Nov 6 05:25:44.481510 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Nov 6 05:25:44.481533 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 6 05:25:44.481548 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 6 05:25:44.481558 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Nov 6 05:25:44.481568 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 6 05:25:44.481582 kernel: efi: EFI v2.7 by EDK II Nov 6 05:25:44.481593 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Nov 6 05:25:44.481603 kernel: random: crng init done Nov 6 05:25:44.481617 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Nov 6 05:25:44.481627 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Nov 6 05:25:44.481640 kernel: secureboot: Secure boot disabled Nov 6 05:25:44.481650 kernel: SMBIOS 2.8 present. 
Nov 6 05:25:44.481661 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Nov 6 05:25:44.481671 kernel: DMI: Memory slots populated: 1/1 Nov 6 05:25:44.481685 kernel: Hypervisor detected: KVM Nov 6 05:25:44.481696 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Nov 6 05:25:44.481706 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 05:25:44.481716 kernel: kvm-clock: using sched offset of 5622691379 cycles Nov 6 05:25:44.481727 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 05:25:44.481738 kernel: tsc: Detected 2794.748 MHz processor Nov 6 05:25:44.481749 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 05:25:44.481759 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 05:25:44.481770 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Nov 6 05:25:44.481780 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 6 05:25:44.481794 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 05:25:44.481805 kernel: Using GB pages for direct mapping Nov 6 05:25:44.481815 kernel: ACPI: Early table checksum verification disabled Nov 6 05:25:44.481826 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Nov 6 05:25:44.481837 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Nov 6 05:25:44.481847 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 05:25:44.481858 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 05:25:44.481869 kernel: ACPI: FACS 0x000000009CBDD000 000040 Nov 6 05:25:44.481880 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 05:25:44.481893 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 05:25:44.481904 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 05:25:44.481914 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 05:25:44.481925 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Nov 6 05:25:44.481936 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Nov 6 05:25:44.481946 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Nov 6 05:25:44.481957 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Nov 6 05:25:44.481968 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Nov 6 05:25:44.481978 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Nov 6 05:25:44.481991 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Nov 6 05:25:44.482002 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Nov 6 05:25:44.482012 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Nov 6 05:25:44.482023 kernel: No NUMA configuration found Nov 6 05:25:44.482033 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Nov 6 05:25:44.482044 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Nov 6 05:25:44.482055 kernel: Zone ranges: Nov 6 05:25:44.482066 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 05:25:44.482076 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Nov 6 05:25:44.482089 kernel: Normal empty Nov 6 05:25:44.482100 kernel: Device empty Nov 6 05:25:44.482110 kernel: Movable zone start for 
each node Nov 6 05:25:44.482121 kernel: Early memory node ranges Nov 6 05:25:44.482141 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 6 05:25:44.482155 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Nov 6 05:25:44.482166 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Nov 6 05:25:44.482177 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Nov 6 05:25:44.482187 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Nov 6 05:25:44.482198 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Nov 6 05:25:44.482211 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Nov 6 05:25:44.482222 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Nov 6 05:25:44.482235 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Nov 6 05:25:44.482246 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 05:25:44.482266 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 6 05:25:44.482280 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Nov 6 05:25:44.482291 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 05:25:44.482302 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Nov 6 05:25:44.482315 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Nov 6 05:25:44.482327 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 6 05:25:44.482340 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Nov 6 05:25:44.482351 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Nov 6 05:25:44.482365 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 6 05:25:44.482375 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 05:25:44.482384 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 05:25:44.482393 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 6 05:25:44.482402 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 05:25:44.482413 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 05:25:44.482422 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 05:25:44.482432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 05:25:44.482441 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 05:25:44.482450 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 6 05:25:44.482458 kernel: TSC deadline timer available Nov 6 05:25:44.482468 kernel: CPU topo: Max. logical packages: 1 Nov 6 05:25:44.482477 kernel: CPU topo: Max. logical dies: 1 Nov 6 05:25:44.482485 kernel: CPU topo: Max. dies per package: 1 Nov 6 05:25:44.482497 kernel: CPU topo: Max. threads per core: 1 Nov 6 05:25:44.482506 kernel: CPU topo: Num. cores per package: 4 Nov 6 05:25:44.482536 kernel: CPU topo: Num. 
threads per package: 4 Nov 6 05:25:44.482547 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Nov 6 05:25:44.482568 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 6 05:25:44.482598 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 6 05:25:44.482611 kernel: kvm-guest: setup PV sched yield Nov 6 05:25:44.482622 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Nov 6 05:25:44.482633 kernel: Booting paravirtualized kernel on KVM Nov 6 05:25:44.482650 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 05:25:44.482661 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 6 05:25:44.482672 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Nov 6 05:25:44.482684 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Nov 6 05:25:44.482694 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 6 05:25:44.482703 kernel: kvm-guest: PV spinlocks enabled Nov 6 05:25:44.482713 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 6 05:25:44.482727 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=42c7eeb79a8ee89597bba4204806137326be9acdbca65a8fd923766f65b62f69 Nov 6 05:25:44.482740 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 6 05:25:44.482750 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 6 05:25:44.482759 kernel: Fallback order for Node 0: 0 Nov 6 05:25:44.482769 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Nov 6 05:25:44.482780 kernel: Policy zone: DMA32 Nov 6 05:25:44.482791 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 05:25:44.482803 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 6 05:25:44.482814 kernel: ftrace: allocating 40092 entries in 157 pages Nov 6 05:25:44.482825 kernel: ftrace: allocated 157 pages with 5 groups Nov 6 05:25:44.482840 kernel: Dynamic Preempt: voluntary Nov 6 05:25:44.482851 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 05:25:44.482863 kernel: rcu: RCU event tracing is enabled. Nov 6 05:25:44.482874 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 6 05:25:44.482886 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 05:25:44.482897 kernel: Rude variant of Tasks RCU enabled. Nov 6 05:25:44.482908 kernel: Tracing variant of Tasks RCU enabled. Nov 6 05:25:44.482920 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 05:25:44.482931 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 6 05:25:44.482948 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 05:25:44.482960 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 05:25:44.482971 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 6 05:25:44.482982 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 6 05:25:44.482993 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 6 05:25:44.483004 kernel: Console: colour dummy device 80x25 Nov 6 05:25:44.483015 kernel: printk: legacy console [ttyS0] enabled Nov 6 05:25:44.483026 kernel: ACPI: Core revision 20240827 Nov 6 05:25:44.483037 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 6 05:25:44.483052 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 05:25:44.483063 kernel: x2apic enabled Nov 6 05:25:44.483074 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 05:25:44.483085 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 6 05:25:44.483096 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 6 05:25:44.483107 kernel: kvm-guest: setup PV IPIs Nov 6 05:25:44.483118 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 6 05:25:44.483129 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Nov 6 05:25:44.483150 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Nov 6 05:25:44.483164 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 6 05:25:44.483175 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 6 05:25:44.483186 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 6 05:25:44.483198 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 05:25:44.483209 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 05:25:44.483220 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 05:25:44.483231 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 6 05:25:44.483242 kernel: active return thunk: retbleed_return_thunk Nov 6 05:25:44.483253 kernel: RETBleed: Mitigation: untrained return thunk Nov 6 05:25:44.483271 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 6 05:25:44.483282 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 6 05:25:44.483293 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 6 05:25:44.483305 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 6 05:25:44.483319 kernel: active return thunk: srso_return_thunk Nov 6 05:25:44.483331 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 6 05:25:44.483344 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 05:25:44.483355 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 05:25:44.483369 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 05:25:44.483380 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 05:25:44.483391 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 6 05:25:44.483402 kernel: Freeing SMP alternatives memory: 32K Nov 6 05:25:44.483414 kernel: pid_max: default: 32768 minimum: 301 Nov 6 05:25:44.483425 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 6 05:25:44.483436 kernel: landlock: Up and running. Nov 6 05:25:44.483446 kernel: SELinux: Initializing. 
Nov 6 05:25:44.483458 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 6 05:25:44.483472 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 6 05:25:44.483483 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 6 05:25:44.483494 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 6 05:25:44.483505 kernel: ... version: 0 Nov 6 05:25:44.483531 kernel: ... bit width: 48 Nov 6 05:25:44.483542 kernel: ... generic registers: 6 Nov 6 05:25:44.483553 kernel: ... value mask: 0000ffffffffffff Nov 6 05:25:44.483564 kernel: ... max period: 00007fffffffffff Nov 6 05:25:44.483575 kernel: ... fixed-purpose events: 0 Nov 6 05:25:44.483590 kernel: ... event mask: 000000000000003f Nov 6 05:25:44.483601 kernel: signal: max sigframe size: 1776 Nov 6 05:25:44.483612 kernel: rcu: Hierarchical SRCU implementation. Nov 6 05:25:44.483624 kernel: rcu: Max phase no-delay instances is 400. Nov 6 05:25:44.483639 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 6 05:25:44.483650 kernel: smp: Bringing up secondary CPUs ... Nov 6 05:25:44.483661 kernel: smpboot: x86: Booting SMP configuration: Nov 6 05:25:44.483672 kernel: .... node #0, CPUs: #1 #2 #3 Nov 6 05:25:44.483683 kernel: smp: Brought up 1 node, 4 CPUs Nov 6 05:25:44.483694 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Nov 6 05:25:44.483709 kernel: Memory: 2441100K/2565800K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15356K init, 2688K bss, 118764K reserved, 0K cma-reserved) Nov 6 05:25:44.483720 kernel: devtmpfs: initialized Nov 6 05:25:44.483730 kernel: x86/mm: Memory block size: 128MB Nov 6 05:25:44.483741 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Nov 6 05:25:44.483752 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Nov 6 05:25:44.483763 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Nov 6 05:25:44.483775 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Nov 6 05:25:44.483786 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Nov 6 05:25:44.483800 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Nov 6 05:25:44.483811 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 05:25:44.483823 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 6 05:25:44.483834 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 05:25:44.483845 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 05:25:44.483856 kernel: audit: initializing netlink subsys (disabled) Nov 6 05:25:44.483867 kernel: audit: type=2000 audit(1762406740.409:1): state=initialized audit_enabled=0 res=1 Nov 6 05:25:44.483879 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 05:25:44.483890 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 05:25:44.483904 kernel: cpuidle: using governor menu Nov 6 05:25:44.483916 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 05:25:44.483927 kernel: dca service started, version 1.12.1 Nov 6 05:25:44.483938 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Nov 6 05:25:44.483949 kernel: PCI: Using configuration type 1 for base access Nov 
6 05:25:44.483960 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 6 05:25:44.483971 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 05:25:44.483982 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 05:25:44.483993 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 05:25:44.484008 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 05:25:44.484019 kernel: ACPI: Added _OSI(Module Device) Nov 6 05:25:44.484030 kernel: ACPI: Added _OSI(Processor Device) Nov 6 05:25:44.484041 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 05:25:44.484052 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 05:25:44.484063 kernel: ACPI: Interpreter enabled Nov 6 05:25:44.484074 kernel: ACPI: PM: (supports S0 S3 S5) Nov 6 05:25:44.484085 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 05:25:44.484096 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 05:25:44.484115 kernel: PCI: Using E820 reservations for host bridge windows Nov 6 05:25:44.484126 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 6 05:25:44.484147 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 05:25:44.484437 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 6 05:25:44.484654 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 6 05:25:44.484821 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 6 05:25:44.484838 kernel: PCI host bridge to bus 0000:00 Nov 6 05:25:44.485026 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 05:25:44.485193 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 05:25:44.485358 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 6 05:25:44.485527 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Nov 6 05:25:44.485686 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Nov 6 05:25:44.485837 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Nov 6 05:25:44.485996 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 05:25:44.486210 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 6 05:25:44.486401 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Nov 6 05:25:44.486592 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Nov 6 05:25:44.486760 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Nov 6 05:25:44.486925 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Nov 6 05:25:44.487089 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 6 05:25:44.487296 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 6 05:25:44.487470 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Nov 6 05:25:44.487661 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Nov 6 05:25:44.487827 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Nov 6 05:25:44.488003 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 6 05:25:44.488198 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Nov 6 05:25:44.488382 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] 
Nov 6 05:25:44.488575 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Nov 6 05:25:44.488753 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 6 05:25:44.488923 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Nov 6 05:25:44.489090 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Nov 6 05:25:44.489265 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Nov 6 05:25:44.489430 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Nov 6 05:25:44.489612 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 6 05:25:44.489763 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 6 05:25:44.489922 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 6 05:25:44.490061 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Nov 6 05:25:44.490240 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Nov 6 05:25:44.490406 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 6 05:25:44.490576 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Nov 6 05:25:44.490596 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 05:25:44.490607 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 05:25:44.490617 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 05:25:44.490627 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 6 05:25:44.490637 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 6 05:25:44.490647 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 6 05:25:44.490658 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 6 05:25:44.490667 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 6 05:25:44.490676 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 6 05:25:44.490688 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 6 05:25:44.490698 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 6 05:25:44.490708 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 6 05:25:44.490717 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 6 05:25:44.490726 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 6 05:25:44.490736 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 6 05:25:44.490745 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 6 05:25:44.490756 kernel: iommu: Default domain type: Translated Nov 6 05:25:44.490767 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 05:25:44.490781 kernel: efivars: Registered efivars operations Nov 6 05:25:44.490793 kernel: PCI: Using ACPI for IRQ routing Nov 6 05:25:44.490804 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 05:25:44.490815 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Nov 6 05:25:44.490826 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Nov 6 05:25:44.490837 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Nov 6 05:25:44.490848 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Nov 6 05:25:44.490859 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Nov 6 05:25:44.490870 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Nov 6 05:25:44.490885 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Nov 6 05:25:44.490896 kernel: e820: reserve RAM buffer [mem 
0x9cedc000-0x9fffffff] Nov 6 05:25:44.491063 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 6 05:25:44.491244 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 6 05:25:44.491411 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 05:25:44.491428 kernel: vgaarb: loaded Nov 6 05:25:44.491440 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 6 05:25:44.491452 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 6 05:25:44.491469 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 05:25:44.491480 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 05:25:44.491492 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 05:25:44.491504 kernel: pnp: PnP ACPI init Nov 6 05:25:44.491711 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Nov 6 05:25:44.491733 kernel: pnp: PnP ACPI: found 6 devices Nov 6 05:25:44.491746 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 05:25:44.491757 kernel: NET: Registered PF_INET protocol family Nov 6 05:25:44.491769 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 6 05:25:44.491784 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 6 05:25:44.491796 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 05:25:44.491808 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 05:25:44.491820 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 6 05:25:44.491832 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 6 05:25:44.491844 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 6 05:25:44.491856 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 6 05:25:44.491868 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 05:25:44.491883 kernel: NET: Registered PF_XDP protocol family Nov 6 05:25:44.492042 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Nov 6 05:25:44.492210 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Nov 6 05:25:44.492379 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 05:25:44.492550 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 6 05:25:44.492703 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 6 05:25:44.492852 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Nov 6 05:25:44.493002 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Nov 6 05:25:44.493170 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Nov 6 05:25:44.493188 kernel: PCI: CLS 0 bytes, default 64 Nov 6 05:25:44.493202 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Nov 6 05:25:44.493220 kernel: Initialise system trusted keyrings Nov 6 05:25:44.493232 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 6 05:25:44.493247 kernel: Key type asymmetric registered Nov 6 05:25:44.493259 kernel: Asymmetric key parser 'x509' registered Nov 6 05:25:44.493271 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 05:25:44.493283 kernel: io scheduler mq-deadline registered Nov 6 05:25:44.493295 kernel: io scheduler kyber registered Nov 6 05:25:44.493307 
kernel: io scheduler bfq registered Nov 6 05:25:44.493319 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 05:25:44.493333 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 6 05:25:44.493347 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 6 05:25:44.493361 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 6 05:25:44.493376 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 05:25:44.493388 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 05:25:44.493401 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 05:25:44.493413 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 05:25:44.493425 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 05:25:44.493623 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 6 05:25:44.493641 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 6 05:25:44.493792 kernel: rtc_cmos 00:04: registered as rtc0 Nov 6 05:25:44.493947 kernel: rtc_cmos 00:04: setting system clock to 2025-11-06T05:25:42 UTC (1762406742) Nov 6 05:25:44.494093 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Nov 6 05:25:44.494109 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 6 05:25:44.494121 kernel: efifb: probing for efifb Nov 6 05:25:44.494142 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Nov 6 05:25:44.494155 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Nov 6 05:25:44.494167 kernel: efifb: scrolling: redraw Nov 6 05:25:44.494179 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 6 05:25:44.494196 kernel: Console: switching to colour frame buffer device 160x50 Nov 6 05:25:44.494208 kernel: fb0: EFI VGA frame buffer device Nov 6 05:25:44.494232 kernel: pstore: Using crash dump compression: deflate Nov 6 05:25:44.494245 kernel: pstore: Registered efi_pstore as persistent store backend Nov 6 05:25:44.494257 kernel: NET: Registered PF_INET6 protocol family Nov 6 05:25:44.494269 kernel: Segment Routing with IPv6 Nov 6 05:25:44.494281 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 05:25:44.494293 kernel: NET: Registered PF_PACKET protocol family Nov 6 05:25:44.494305 kernel: Key type dns_resolver registered Nov 6 05:25:44.494316 kernel: IPI shorthand broadcast: enabled Nov 6 05:25:44.494332 kernel: sched_clock: Marking stable (2467002875, 301196113)->(2949146051, -180947063) Nov 6 05:25:44.494344 kernel: registered taskstats version 1 Nov 6 05:25:44.494356 kernel: Loading compiled-in X.509 certificates Nov 6 05:25:44.494368 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: edee08bd79f57120bcf336d97df00a0ad5e85412' Nov 6 05:25:44.494380 kernel: Demotion targets for Node 0: null Nov 6 05:25:44.494392 kernel: Key type .fscrypt registered Nov 6 05:25:44.494404 kernel: Key type fscrypt-provisioning registered Nov 6 05:25:44.494416 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 6 05:25:44.494427 kernel: ima: Allocated hash algorithm: sha1 Nov 6 05:25:44.494442 kernel: ima: No architecture policies found Nov 6 05:25:44.494454 kernel: clk: Disabling unused clocks Nov 6 05:25:44.494466 kernel: Freeing unused kernel image (initmem) memory: 15356K Nov 6 05:25:44.494478 kernel: Write protecting the kernel read-only data: 45056k Nov 6 05:25:44.494490 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Nov 6 05:25:44.494502 kernel: Run /init as init process Nov 6 05:25:44.494529 kernel: with arguments: Nov 6 05:25:44.494542 kernel: /init Nov 6 05:25:44.494553 kernel: with environment: Nov 6 05:25:44.494569 kernel: HOME=/ Nov 6 05:25:44.494581 kernel: TERM=linux Nov 6 05:25:44.494593 kernel: SCSI subsystem initialized Nov 6 05:25:44.494604 kernel: libata version 3.00 loaded. Nov 6 05:25:44.494770 kernel: ahci 0000:00:1f.2: version 3.0 Nov 6 05:25:44.494786 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 6 05:25:44.494935 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 6 05:25:44.495086 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 6 05:25:44.495260 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 6 05:25:44.495465 kernel: scsi host0: ahci Nov 6 05:25:44.495664 kernel: scsi host1: ahci Nov 6 05:25:44.495844 kernel: scsi host2: ahci Nov 6 05:25:44.496019 kernel: scsi host3: ahci Nov 6 05:25:44.496207 kernel: scsi host4: ahci Nov 6 05:25:44.496385 kernel: scsi host5: ahci Nov 6 05:25:44.496409 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Nov 6 05:25:44.496422 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Nov 6 05:25:44.496434 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Nov 6 05:25:44.496446 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Nov 6 05:25:44.496459 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Nov 6 05:25:44.496471 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Nov 6 05:25:44.496483 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 6 05:25:44.496495 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 6 05:25:44.496510 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 6 05:25:44.496538 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 6 05:25:44.496550 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 6 05:25:44.496562 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 6 05:25:44.496574 kernel: ata3.00: LPM support broken, forcing max_power Nov 6 05:25:44.496586 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 6 05:25:44.496598 kernel: ata3.00: applying bridge limits Nov 6 05:25:44.496610 kernel: ata3.00: LPM support broken, forcing max_power Nov 6 05:25:44.496621 kernel: ata3.00: configured for UDMA/100 Nov 6 05:25:44.496832 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 6 05:25:44.497012 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 6 05:25:44.497176 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 6 05:25:44.497192 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 05:25:44.497204 kernel: GPT:16515071 != 27000831 Nov 6 05:25:44.497216 kernel: GPT:Alternate GPT header not at the end of the disk. 
Nov 6 05:25:44.497227 kernel: GPT:16515071 != 27000831 Nov 6 05:25:44.497243 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 6 05:25:44.497255 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 05:25:44.497434 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 6 05:25:44.497450 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 6 05:25:44.497643 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 6 05:25:44.497659 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 05:25:44.497671 kernel: device-mapper: uevent: version 1.0.3 Nov 6 05:25:44.497684 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 6 05:25:44.497696 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 6 05:25:44.497713 kernel: raid6: avx2x4 gen() 30104 MB/s Nov 6 05:25:44.497725 kernel: raid6: avx2x2 gen() 30464 MB/s Nov 6 05:25:44.497737 kernel: raid6: avx2x1 gen() 25650 MB/s Nov 6 05:25:44.497748 kernel: raid6: using algorithm avx2x2 gen() 30464 MB/s Nov 6 05:25:44.497760 kernel: raid6: .... xor() 19787 MB/s, rmw enabled Nov 6 05:25:44.497772 kernel: raid6: using avx2x2 recovery algorithm Nov 6 05:25:44.497785 kernel: xor: automatically using best checksumming function avx Nov 6 05:25:44.497797 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 05:25:44.497809 kernel: BTRFS: device fsid b5cf1d69-dae6-4f65-bb6f-44a747495a60 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (181) Nov 6 05:25:44.497824 kernel: BTRFS info (device dm-0): first mount of filesystem b5cf1d69-dae6-4f65-bb6f-44a747495a60 Nov 6 05:25:44.497836 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 05:25:44.497848 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 05:25:44.497860 kernel: BTRFS info (device dm-0): enabling free space tree Nov 6 05:25:44.497872 kernel: loop: module loaded Nov 6 05:25:44.497884 kernel: loop0: detected capacity change from 0 to 101000 Nov 6 05:25:44.497896 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 05:25:44.497909 systemd[1]: Successfully made /usr/ read-only. Nov 6 05:25:44.497928 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 05:25:44.497941 systemd[1]: Detected virtualization kvm. Nov 6 05:25:44.497953 systemd[1]: Detected architecture x86-64. Nov 6 05:25:44.497965 systemd[1]: Running in initrd. Nov 6 05:25:44.497978 systemd[1]: No hostname configured, using default hostname. Nov 6 05:25:44.497993 systemd[1]: Hostname set to . Nov 6 05:25:44.498006 systemd[1]: Initializing machine ID from VM UUID. Nov 6 05:25:44.498021 systemd[1]: Queued start job for default target initrd.target. Nov 6 05:25:44.498033 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 6 05:25:44.498046 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 05:25:44.498059 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 6 05:25:44.498073 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 05:25:44.498086 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 05:25:44.498099 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 05:25:44.498115 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 05:25:44.498128 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 05:25:44.498150 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 05:25:44.498163 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 6 05:25:44.498176 systemd[1]: Reached target paths.target - Path Units. Nov 6 05:25:44.498188 systemd[1]: Reached target slices.target - Slice Units. Nov 6 05:25:44.498201 systemd[1]: Reached target swap.target - Swaps. Nov 6 05:25:44.498214 systemd[1]: Reached target timers.target - Timer Units. Nov 6 05:25:44.498226 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 05:25:44.498242 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 05:25:44.498255 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 05:25:44.498268 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 05:25:44.498280 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 05:25:44.498293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 05:25:44.498306 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 05:25:44.498318 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 05:25:44.498331 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 05:25:44.498347 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 05:25:44.498359 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 05:25:44.498372 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 05:25:44.498386 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 6 05:25:44.498398 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 05:25:44.498411 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 05:25:44.498424 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 05:25:44.498436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 05:25:44.498453 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 05:25:44.498465 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 05:25:44.498478 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 05:25:44.498491 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 05:25:44.498606 systemd-journald[317]: Collecting audit messages is disabled. Nov 6 05:25:44.498640 systemd-journald[317]: Journal started Nov 6 05:25:44.498666 systemd-journald[317]: Runtime Journal (/run/log/journal/7d4eca58afb943c7b77a37b89ad4bc26) is 6M, max 48.1M, 42M free. 
Nov 6 05:25:44.501859 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 05:25:44.506647 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 05:25:44.507821 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 05:25:44.512653 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 05:25:44.527541 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 05:25:44.530542 kernel: Bridge firewalling registered Nov 6 05:25:44.530529 systemd-modules-load[321]: Inserted module 'br_netfilter' Nov 6 05:25:44.549746 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 05:25:44.552073 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 05:25:44.559200 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 05:25:44.564107 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 6 05:25:44.564907 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 05:25:44.577669 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 05:25:44.577999 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 05:25:44.595109 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 05:25:44.597026 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 05:25:44.606817 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 05:25:44.610225 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 05:25:44.637096 dracut-cmdline[360]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=42c7eeb79a8ee89597bba4204806137326be9acdbca65a8fd923766f65b62f69 Nov 6 05:25:44.667899 systemd-resolved[353]: Positive Trust Anchors: Nov 6 05:25:44.667919 systemd-resolved[353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 05:25:44.667959 systemd-resolved[353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 05:25:44.704807 systemd-resolved[353]: Defaulting to hostname 'linux'. Nov 6 05:25:44.706620 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 05:25:44.706804 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 05:25:44.753536 kernel: Loading iSCSI transport class v2.0-870. 
Nov 6 05:25:44.767533 kernel: iscsi: registered transport (tcp) Nov 6 05:25:44.791792 kernel: iscsi: registered transport (qla4xxx) Nov 6 05:25:44.791848 kernel: QLogic iSCSI HBA Driver Nov 6 05:25:44.820056 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 05:25:44.896952 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 05:25:44.902339 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 05:25:44.961928 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 05:25:44.963583 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 05:25:44.968750 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 05:25:45.018634 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 05:25:45.024142 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 05:25:45.064276 systemd-udevd[595]: Using default interface naming scheme 'v255'. Nov 6 05:25:45.080314 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 05:25:45.086599 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 05:25:45.115669 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 05:25:45.121783 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 05:25:45.125006 dracut-pre-trigger[675]: rd.md=0: removing MD RAID activation Nov 6 05:25:45.156717 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 05:25:45.160160 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 05:25:45.190455 systemd-networkd[707]: lo: Link UP Nov 6 05:25:45.190467 systemd-networkd[707]: lo: Gained carrier Nov 6 05:25:45.191090 systemd-networkd[707]: Enumeration completed Nov 6 05:25:45.191271 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 05:25:45.193882 systemd[1]: Reached target network.target - Network. Nov 6 05:25:45.272557 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 05:25:45.278975 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 05:25:45.355321 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 05:25:45.370382 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 6 05:25:45.386678 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 6 05:25:45.398249 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 05:25:45.407754 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 05:25:45.413548 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 6 05:25:45.419570 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 6 05:25:45.426796 kernel: AES CTR mode by8 optimization enabled Nov 6 05:25:45.426825 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 05:25:45.437943 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 05:25:45.440011 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Nov 6 05:25:45.446434 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 05:25:45.446449 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 05:25:45.447035 systemd-networkd[707]: eth0: Link UP Nov 6 05:25:45.448607 systemd-networkd[707]: eth0: Gained carrier Nov 6 05:25:45.448617 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 05:25:45.453294 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 05:25:45.463374 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 05:25:45.469578 systemd-networkd[707]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 6 05:25:45.474859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 05:25:45.474986 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 05:25:45.480560 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 05:25:45.482720 disk-uuid[833]: Primary Header is updated. Nov 6 05:25:45.482720 disk-uuid[833]: Secondary Entries is updated. Nov 6 05:25:45.482720 disk-uuid[833]: Secondary Header is updated. Nov 6 05:25:45.486687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 05:25:45.489975 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 05:25:45.511549 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 05:25:45.531161 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 05:25:45.752594 systemd-resolved[353]: Detected conflict on linux IN A 10.0.0.73 Nov 6 05:25:45.752615 systemd-resolved[353]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Nov 6 05:25:46.537902 disk-uuid[837]: Warning: The kernel is still using the old partition table. Nov 6 05:25:46.537902 disk-uuid[837]: The new table will be used at the next reboot or after you Nov 6 05:25:46.537902 disk-uuid[837]: run partprobe(8) or kpartx(8) Nov 6 05:25:46.537902 disk-uuid[837]: The operation has completed successfully. Nov 6 05:25:46.549122 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 05:25:46.549284 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 05:25:46.554144 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 05:25:46.605556 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (859) Nov 6 05:25:46.605610 kernel: BTRFS info (device vda6): first mount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:25:46.608494 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 05:25:46.612494 kernel: BTRFS info (device vda6): turning on async discard Nov 6 05:25:46.612529 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 05:25:46.621544 kernel: BTRFS info (device vda6): last unmount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:25:46.622943 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 05:25:46.628042 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 6 05:25:47.029423 ignition[878]: Ignition 2.22.0 Nov 6 05:25:47.029441 ignition[878]: Stage: fetch-offline Nov 6 05:25:47.029546 ignition[878]: no configs at "/usr/lib/ignition/base.d" Nov 6 05:25:47.029564 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 05:25:47.029745 ignition[878]: parsed url from cmdline: "" Nov 6 05:25:47.029749 ignition[878]: no config URL provided Nov 6 05:25:47.029756 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 05:25:47.029769 ignition[878]: no config at "/usr/lib/ignition/user.ign" Nov 6 05:25:47.029827 ignition[878]: op(1): [started] loading QEMU firmware config module Nov 6 05:25:47.029832 ignition[878]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 6 05:25:47.049035 ignition[878]: op(1): [finished] loading QEMU firmware config module Nov 6 05:25:47.127669 ignition[878]: parsing config with SHA512: 768e0e8fca5cd682a52bbdef6b8c19dd2b4276700ec05f7a6fe597955b46e2350d52753fb52f2a39cc3d8f1c16565a8b4249b9d88971238e6c0a2e6eb303b414 Nov 6 05:25:47.133826 unknown[878]: fetched base config from "system" Nov 6 05:25:47.133841 unknown[878]: fetched user config from "qemu" Nov 6 05:25:47.134387 ignition[878]: fetch-offline: fetch-offline passed Nov 6 05:25:47.138740 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 05:25:47.134469 ignition[878]: Ignition finished successfully Nov 6 05:25:47.139247 systemd-networkd[707]: eth0: Gained IPv6LL Nov 6 05:25:47.141937 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 6 05:25:47.143931 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 05:25:47.249289 ignition[888]: Ignition 2.22.0 Nov 6 05:25:47.249305 ignition[888]: Stage: kargs Nov 6 05:25:47.249458 ignition[888]: no configs at "/usr/lib/ignition/base.d" Nov 6 05:25:47.249468 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 05:25:47.255378 ignition[888]: kargs: kargs passed Nov 6 05:25:47.256578 ignition[888]: Ignition finished successfully Nov 6 05:25:47.260680 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 05:25:47.265211 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 05:25:47.352757 ignition[896]: Ignition 2.22.0 Nov 6 05:25:47.352770 ignition[896]: Stage: disks Nov 6 05:25:47.352938 ignition[896]: no configs at "/usr/lib/ignition/base.d" Nov 6 05:25:47.352949 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 05:25:47.353767 ignition[896]: disks: disks passed Nov 6 05:25:47.353821 ignition[896]: Ignition finished successfully Nov 6 05:25:47.363198 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 05:25:47.363536 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 05:25:47.368303 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 05:25:47.370093 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 05:25:47.375593 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 05:25:47.375895 systemd[1]: Reached target basic.target - Basic System. Nov 6 05:25:47.381178 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
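The fetch-offline stage above first looks for a baked-in config at /usr/lib/ignition/user.ign and, finding none, loads the QEMU firmware config module (qemu_fw_cfg) and fetches the user config from the hypervisor, which is only visible here as a SHA512. For orientation, a minimal Ignition v3 config of the kind this stage parses looks roughly like the sketch below; the spec version, file path and data: URL are illustrative assumptions, not the config actually fetched on this boot:

  {
    "ignition": { "version": "3.4.0" },
    "storage": {
      "files": [
        {
          "path": "/home/core/install.sh",
          "mode": 493,
          "contents": { "source": "data:,echo%20hello" }
        }
      ]
    }
  }

The later kargs, disks and files stages in this log all consume sections of the same parsed config.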
Nov 6 05:25:47.430488 systemd-fsck[906]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 6 05:25:47.524196 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 05:25:47.529550 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 05:25:47.682545 kernel: EXT4-fs (vda9): mounted filesystem 05065f18-b1e1-4b9e-83f5-1a1189e0d083 r/w with ordered data mode. Quota mode: none. Nov 6 05:25:47.683091 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 05:25:47.685097 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 05:25:47.688830 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 05:25:47.691724 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 05:25:47.694238 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 6 05:25:47.694280 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 05:25:47.694304 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 05:25:47.710401 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 05:25:47.712346 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 05:25:47.718160 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914) Nov 6 05:25:47.718188 kernel: BTRFS info (device vda6): first mount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:25:47.718205 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 05:25:47.720550 kernel: BTRFS info (device vda6): turning on async discard Nov 6 05:25:47.720664 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 05:25:47.728724 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 05:25:47.787722 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 05:25:47.792334 initrd-setup-root[945]: cut: /sysroot/etc/group: No such file or directory Nov 6 05:25:47.796525 initrd-setup-root[952]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 05:25:47.801959 initrd-setup-root[959]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 05:25:47.909233 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 05:25:47.914441 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 05:25:47.917044 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 05:25:47.944827 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 05:25:47.949573 kernel: BTRFS info (device vda6): last unmount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:25:47.973676 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 05:25:48.005474 ignition[1029]: INFO : Ignition 2.22.0 Nov 6 05:25:48.005474 ignition[1029]: INFO : Stage: mount Nov 6 05:25:48.008183 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 05:25:48.008183 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 05:25:48.008183 ignition[1029]: INFO : mount: mount passed Nov 6 05:25:48.008183 ignition[1029]: INFO : Ignition finished successfully Nov 6 05:25:48.016777 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 05:25:48.018500 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Nov 6 05:25:48.685173 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 05:25:48.709559 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1042) Nov 6 05:25:48.713304 kernel: BTRFS info (device vda6): first mount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:25:48.713353 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 05:25:48.717538 kernel: BTRFS info (device vda6): turning on async discard Nov 6 05:25:48.717569 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 05:25:48.719324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 05:25:48.773244 ignition[1059]: INFO : Ignition 2.22.0 Nov 6 05:25:48.773244 ignition[1059]: INFO : Stage: files Nov 6 05:25:48.776088 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 05:25:48.776088 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 05:25:48.776088 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping Nov 6 05:25:48.781792 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 05:25:48.781792 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 05:25:48.786627 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 05:25:48.786627 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 05:25:48.786627 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 05:25:48.786627 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 05:25:48.786627 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 05:25:48.783653 unknown[1059]: wrote ssh authorized keys file for user: core Nov 6 05:25:48.837918 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 05:25:48.902833 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 05:25:48.902833 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 6 05:25:48.909726 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 05:25:48.909726 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 05:25:48.909726 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 05:25:48.909726 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 05:25:48.909726 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 05:25:48.909726 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 05:25:48.909726 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" 
Nov 6 05:25:48.909726 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 05:25:48.909726 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 05:25:48.909726 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 05:25:48.940937 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 05:25:48.940937 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 05:25:48.940937 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 6 05:25:49.356977 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 6 05:25:49.910897 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 05:25:49.910897 ignition[1059]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 6 05:25:49.917203 ignition[1059]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 05:25:49.917203 ignition[1059]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 05:25:49.917203 ignition[1059]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 6 05:25:49.917203 ignition[1059]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 6 05:25:49.917203 ignition[1059]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 6 05:25:49.917203 ignition[1059]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 6 05:25:49.917203 ignition[1059]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 6 05:25:49.917203 ignition[1059]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 6 05:25:49.948545 ignition[1059]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 6 05:25:49.955714 ignition[1059]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 6 05:25:49.958905 ignition[1059]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 6 05:25:49.958905 ignition[1059]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 6 05:25:49.964090 ignition[1059]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 05:25:49.964090 ignition[1059]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 05:25:49.964090 ignition[1059]: INFO : files: createResultFile: createFiles: 
op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 05:25:49.964090 ignition[1059]: INFO : files: files passed Nov 6 05:25:49.964090 ignition[1059]: INFO : Ignition finished successfully Nov 6 05:25:49.965697 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 05:25:49.968325 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 05:25:49.976670 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 05:25:49.995246 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 05:25:49.995389 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 6 05:25:50.006173 initrd-setup-root-after-ignition[1090]: grep: /sysroot/oem/oem-release: No such file or directory Nov 6 05:25:50.012308 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 05:25:50.012308 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 05:25:50.017649 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 05:25:50.021342 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 05:25:50.021642 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 05:25:50.029174 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 05:25:50.113021 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 05:25:50.113147 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 05:25:50.116872 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 05:25:50.118753 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 05:25:50.122478 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 05:25:50.127851 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 05:25:50.174018 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 05:25:50.175630 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 05:25:50.200837 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 6 05:25:50.201087 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 05:25:50.204864 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 05:25:50.208709 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 05:25:50.212041 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 05:25:50.212153 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 05:25:50.213786 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 05:25:50.214334 systemd[1]: Stopped target basic.target - Basic System. Nov 6 05:25:50.221468 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 05:25:50.224790 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 05:25:50.228042 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 05:25:50.231564 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
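Among the files the Ignition files stage wrote above is /etc/flatcar/update.conf, which update_engine and locksmithd consult once the real root is running (both appear near the end of this log). Its actual contents are not shown here; a typical sketch, with values chosen purely for illustration, is:

  GROUP=stable
  REBOOT_STRATEGY=off

GROUP selects the release channel the update engine polls, and REBOOT_STRATEGY is what locksmithd reads to decide how to reboot after an update; the locksmithd line further down reports the strategy it actually resolved for this host (strategy="reboot").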
Nov 6 05:25:50.232184 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 05:25:50.232730 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 05:25:50.233283 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 05:25:50.244573 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 05:25:50.248193 systemd[1]: Stopped target swap.target - Swaps. Nov 6 05:25:50.251083 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 05:25:50.251195 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 05:25:50.255425 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 05:25:50.256103 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 05:25:50.262009 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 05:25:50.263667 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 05:25:50.264048 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 05:25:50.264155 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 05:25:50.272313 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 05:25:50.272440 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 05:25:50.275877 systemd[1]: Stopped target paths.target - Path Units. Nov 6 05:25:50.278835 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 05:25:50.283603 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 05:25:50.285892 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 05:25:50.289312 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 05:25:50.293789 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 05:25:50.294035 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 05:25:50.297067 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 05:25:50.297153 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 05:25:50.300419 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 05:25:50.300602 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 05:25:50.304206 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 05:25:50.304404 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 05:25:50.309388 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 05:25:50.312324 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 05:25:50.312447 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 05:25:50.317177 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 05:25:50.320035 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 05:25:50.320202 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 05:25:50.323755 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 05:25:50.323907 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 05:25:50.345599 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 05:25:50.347647 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 6 05:25:50.371236 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 05:25:50.385490 ignition[1116]: INFO : Ignition 2.22.0 Nov 6 05:25:50.385490 ignition[1116]: INFO : Stage: umount Nov 6 05:25:50.388998 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 05:25:50.388998 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 05:25:50.388998 ignition[1116]: INFO : umount: umount passed Nov 6 05:25:50.388998 ignition[1116]: INFO : Ignition finished successfully Nov 6 05:25:50.399379 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 05:25:50.399608 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 05:25:50.400400 systemd[1]: Stopped target network.target - Network. Nov 6 05:25:50.404983 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 05:25:50.405060 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 05:25:50.405658 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 05:25:50.405759 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 05:25:50.406417 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 05:25:50.406527 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 05:25:50.417067 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 05:25:50.417218 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 05:25:50.418717 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 05:25:50.422084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 05:25:50.439486 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 05:25:50.439693 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 05:25:50.447791 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 05:25:50.448071 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 05:25:50.448209 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 05:25:50.455864 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 05:25:50.456245 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 05:25:50.456362 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 05:25:50.460670 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 05:25:50.462853 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 05:25:50.462936 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 05:25:50.466592 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 05:25:50.466670 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 05:25:50.473624 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 05:25:50.475392 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 05:25:50.475475 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 05:25:50.476059 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 05:25:50.476107 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 05:25:50.485509 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 05:25:50.485588 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Nov 6 05:25:50.489244 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 05:25:50.489304 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 05:25:50.495665 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 05:25:50.500677 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 05:25:50.500740 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 05:25:50.520039 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 05:25:50.520251 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 05:25:50.525132 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 05:25:50.525249 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 05:25:50.527045 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 05:25:50.527103 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 05:25:50.533477 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 05:25:50.533635 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 05:25:50.538747 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 05:25:50.538835 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 05:25:50.546154 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 05:25:50.546228 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 05:25:50.551112 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 05:25:50.553861 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 05:25:50.553937 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 05:25:50.558045 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 05:25:50.558123 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 05:25:50.564040 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 05:25:50.564120 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 05:25:50.572903 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 6 05:25:50.572968 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 05:25:50.573079 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 05:25:50.573534 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 05:25:50.573667 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 05:25:50.581981 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 05:25:50.582131 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 05:25:50.585180 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 05:25:50.588967 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 05:25:50.628980 systemd[1]: Switching root. Nov 6 05:25:50.674057 systemd-journald[317]: Journal stopped Nov 6 05:25:52.012228 systemd-journald[317]: Received SIGTERM from PID 1 (systemd). 
Nov 6 05:25:52.012304 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 05:25:52.012319 kernel: SELinux: policy capability open_perms=1 Nov 6 05:25:52.012331 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 05:25:52.012348 kernel: SELinux: policy capability always_check_network=0 Nov 6 05:25:52.012365 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 05:25:52.012381 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 05:25:52.012393 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 05:25:52.012404 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 05:25:52.012420 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 05:25:52.012437 kernel: audit: type=1403 audit(1762406751.092:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 05:25:52.012453 systemd[1]: Successfully loaded SELinux policy in 69.410ms. Nov 6 05:25:52.012475 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.236ms. Nov 6 05:25:52.012494 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 05:25:52.012508 systemd[1]: Detected virtualization kvm. Nov 6 05:25:52.012575 systemd[1]: Detected architecture x86-64. Nov 6 05:25:52.012592 systemd[1]: Detected first boot. Nov 6 05:25:52.012604 systemd[1]: Initializing machine ID from VM UUID. Nov 6 05:25:52.012617 zram_generator::config[1161]: No configuration found. Nov 6 05:25:52.012630 kernel: Guest personality initialized and is inactive Nov 6 05:25:52.012642 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 05:25:52.012659 kernel: Initialized host personality Nov 6 05:25:52.012671 kernel: NET: Registered PF_VSOCK protocol family Nov 6 05:25:52.012683 systemd[1]: Populated /etc with preset unit settings. Nov 6 05:25:52.012696 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 05:25:52.012708 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 05:25:52.012721 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 05:25:52.012734 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 05:25:52.012746 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 05:25:52.012759 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 05:25:52.012776 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 05:25:52.012788 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 05:25:52.012801 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 05:25:52.012813 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 05:25:52.012826 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 05:25:52.012838 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 05:25:52.012851 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 05:25:52.012873 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 6 05:25:52.012890 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 05:25:52.012914 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 05:25:52.012927 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 05:25:52.012940 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 05:25:52.012960 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 05:25:52.012973 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 05:25:52.012986 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 05:25:52.012998 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 05:25:52.013016 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 05:25:52.013028 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 05:25:52.013041 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 05:25:52.013053 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 05:25:52.013065 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 05:25:52.013077 systemd[1]: Reached target slices.target - Slice Units. Nov 6 05:25:52.013089 systemd[1]: Reached target swap.target - Swaps. Nov 6 05:25:52.013101 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 05:25:52.013114 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 05:25:52.013131 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 05:25:52.013145 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 05:25:52.013158 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 05:25:52.013177 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 05:25:52.013189 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 05:25:52.013201 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 05:25:52.013213 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 05:25:52.013226 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 05:25:52.013238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 05:25:52.013255 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 05:25:52.013268 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 05:25:52.013280 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 05:25:52.013293 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 05:25:52.013305 systemd[1]: Reached target machines.target - Containers. Nov 6 05:25:52.013318 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 05:25:52.013330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 05:25:52.013343 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Nov 6 05:25:52.013355 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 05:25:52.013372 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 05:25:52.013385 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 05:25:52.013397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 05:25:52.013410 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 05:25:52.013422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 05:25:52.013435 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 05:25:52.013451 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 05:25:52.013463 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 05:25:52.013482 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 05:25:52.013504 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 05:25:52.013541 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 05:25:52.013556 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 05:25:52.013571 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 05:25:52.013586 kernel: ACPI: bus type drm_connector registered Nov 6 05:25:52.013600 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 05:25:52.013612 kernel: fuse: init (API version 7.41) Nov 6 05:25:52.013624 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 05:25:52.013644 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 05:25:52.013683 systemd-journald[1243]: Collecting audit messages is disabled. Nov 6 05:25:52.013707 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 05:25:52.013723 systemd-journald[1243]: Journal started Nov 6 05:25:52.013760 systemd-journald[1243]: Runtime Journal (/run/log/journal/7d4eca58afb943c7b77a37b89ad4bc26) is 6M, max 48.1M, 42M free. Nov 6 05:25:51.701844 systemd[1]: Queued start job for default target multi-user.target. Nov 6 05:25:51.724536 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 6 05:25:51.725059 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 05:25:52.020556 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 05:25:52.025543 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 05:25:52.027383 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 05:25:52.029185 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 05:25:52.031096 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 05:25:52.032781 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 05:25:52.034673 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 05:25:52.036601 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Nov 6 05:25:52.038464 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 05:25:52.040686 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 05:25:52.042985 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 05:25:52.043214 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 05:25:52.045380 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 05:25:52.045614 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 05:25:52.047728 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 05:25:52.047941 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 05:25:52.049923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 05:25:52.050148 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 05:25:52.052366 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 05:25:52.052597 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 05:25:52.054623 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 05:25:52.054854 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 05:25:52.056925 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 05:25:52.059016 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 05:25:52.061326 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 05:25:52.063686 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 05:25:52.081748 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 05:25:52.085069 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 05:25:52.087854 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 05:25:52.090150 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 05:25:52.090180 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 05:25:52.093125 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 05:25:52.108722 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 05:25:52.111728 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 05:25:52.114103 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 05:25:52.118068 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 05:25:52.120240 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 05:25:52.123633 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 05:25:52.125721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 05:25:52.133833 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 05:25:52.140632 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Nov 6 05:25:52.143733 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 05:25:52.148227 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 05:25:52.150890 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 05:25:52.153019 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 05:25:52.157010 systemd-journald[1243]: Time spent on flushing to /var/log/journal/7d4eca58afb943c7b77a37b89ad4bc26 is 41.480ms for 1063 entries. Nov 6 05:25:52.157010 systemd-journald[1243]: System Journal (/var/log/journal/7d4eca58afb943c7b77a37b89ad4bc26) is 8M, max 163.5M, 155.5M free. Nov 6 05:25:52.219680 systemd-journald[1243]: Received client request to flush runtime journal. Nov 6 05:25:52.219737 kernel: loop1: detected capacity change from 0 to 111544 Nov 6 05:25:52.159764 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 05:25:52.169309 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 05:25:52.175835 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 05:25:52.190411 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 05:25:52.223190 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 05:25:52.225935 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 05:25:52.228428 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 05:25:52.234978 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 05:25:52.250536 kernel: loop2: detected capacity change from 0 to 229808 Nov 6 05:25:52.270195 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Nov 6 05:25:52.270724 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Nov 6 05:25:52.279718 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 05:25:52.289543 kernel: loop3: detected capacity change from 0 to 119080 Nov 6 05:25:52.415546 kernel: loop4: detected capacity change from 0 to 111544 Nov 6 05:25:52.432546 kernel: loop5: detected capacity change from 0 to 229808 Nov 6 05:25:52.448540 kernel: loop6: detected capacity change from 0 to 119080 Nov 6 05:25:52.468932 (sd-merge)[1302]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 6 05:25:52.469589 (sd-merge)[1302]: Merged extensions into '/usr'. Nov 6 05:25:52.474219 systemd[1]: Reload requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 05:25:52.474537 systemd[1]: Reloading... Nov 6 05:25:52.789565 zram_generator::config[1325]: No configuration found. Nov 6 05:25:52.822469 ldconfig[1275]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 05:25:52.986979 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 05:25:52.987623 systemd[1]: Reloading finished in 512 ms. Nov 6 05:25:53.019499 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 05:25:53.021703 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 05:25:53.023919 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 05:25:53.045175 systemd[1]: Starting ensure-sysext.service... 
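The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the kubernetes image is the .raw file Ignition linked into /etc/extensions earlier in this log. After boot the merge can be inspected or redone with the stock systemd-sysext verbs, for example:

  systemd-sysext status     # list extension images and whether they are merged
  systemd-sysext refresh    # re-merge after adding or removing an image

These are standard systemd-sysext subcommands; their exact output format depends on the systemd version (256.8 here).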
Nov 6 05:25:53.047652 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 05:25:53.063257 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 05:25:53.074774 systemd[1]: Reload requested from client PID 1368 ('systemctl') (unit ensure-sysext.service)... Nov 6 05:25:53.074789 systemd[1]: Reloading... Nov 6 05:25:53.081037 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 05:25:53.081073 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 05:25:53.081381 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 05:25:53.081735 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 05:25:53.082688 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 05:25:53.082979 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Nov 6 05:25:53.083053 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Nov 6 05:25:53.087631 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 05:25:53.087643 systemd-tmpfiles[1369]: Skipping /boot Nov 6 05:25:53.096087 systemd-udevd[1370]: Using default interface naming scheme 'v255'. Nov 6 05:25:53.099373 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 05:25:53.099387 systemd-tmpfiles[1369]: Skipping /boot Nov 6 05:25:53.127560 zram_generator::config[1397]: No configuration found. Nov 6 05:25:53.229578 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 05:25:53.247543 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 6 05:25:53.253042 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 6 05:25:53.253298 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 6 05:25:53.253552 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 6 05:25:53.258075 kernel: ACPI: button: Power Button [PWRF] Nov 6 05:25:53.356418 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 05:25:53.358890 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 05:25:53.359121 systemd[1]: Reloading finished in 283 ms. Nov 6 05:25:53.402590 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 05:25:53.419968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 05:25:53.426449 kernel: kvm_amd: TSC scaling supported Nov 6 05:25:53.426481 kernel: kvm_amd: Nested Virtualization enabled Nov 6 05:25:53.426494 kernel: kvm_amd: Nested Paging enabled Nov 6 05:25:53.428127 kernel: kvm_amd: LBR virtualization supported Nov 6 05:25:53.428153 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 6 05:25:53.429046 kernel: kvm_amd: Virtual GIF supported Nov 6 05:25:53.461544 kernel: EDAC MC: Ver: 3.0.0 Nov 6 05:25:53.463453 systemd[1]: Finished ensure-sysext.service. Nov 6 05:25:53.480975 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 05:25:53.482290 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Nov 6 05:25:53.485815 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 05:25:53.487882 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 05:25:53.495650 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 05:25:53.499052 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 05:25:53.503235 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 05:25:53.507622 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 05:25:53.509721 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 05:25:53.511123 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 05:25:53.512993 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 05:25:53.514233 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 05:25:53.523765 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 05:25:53.529068 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 05:25:53.534872 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 05:25:53.539040 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 05:25:53.540274 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 05:25:53.541017 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 05:25:53.542214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 05:25:53.543144 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 05:25:53.545340 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 05:25:53.545630 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 05:25:53.547734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 05:25:53.549262 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 05:25:53.552025 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 05:25:53.552309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 05:25:53.555191 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 05:25:53.569094 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 05:25:53.576065 augenrules[1520]: No rules Nov 6 05:25:53.577755 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 05:25:53.578659 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 05:25:53.581876 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 05:25:53.584424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Nov 6 05:25:53.584601 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 05:25:53.586546 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 05:25:53.588614 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 05:25:53.599920 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 05:25:53.600391 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 05:25:53.609268 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 05:25:53.624871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 05:25:53.646225 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 05:25:53.724785 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 05:25:53.726982 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 05:25:53.727161 systemd-networkd[1497]: lo: Link UP Nov 6 05:25:53.727175 systemd-networkd[1497]: lo: Gained carrier Nov 6 05:25:53.729022 systemd-networkd[1497]: Enumeration completed Nov 6 05:25:53.729275 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 05:25:53.729453 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 05:25:53.729464 systemd-networkd[1497]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 05:25:53.730072 systemd-networkd[1497]: eth0: Link UP Nov 6 05:25:53.730374 systemd-networkd[1497]: eth0: Gained carrier Nov 6 05:25:53.730393 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 05:25:53.732879 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 05:25:53.736197 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 05:25:53.749678 systemd-resolved[1502]: Positive Trust Anchors: Nov 6 05:25:53.749690 systemd-resolved[1502]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 05:25:53.749719 systemd-resolved[1502]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 05:25:53.750186 systemd-networkd[1497]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 6 05:25:53.750852 systemd-timesyncd[1503]: Network configuration changed, trying to establish connection. Nov 6 05:25:54.215766 systemd-timesyncd[1503]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 6 05:25:54.215827 systemd-timesyncd[1503]: Initial clock synchronization to Thu 2025-11-06 05:25:54.215577 UTC. 
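As in the initrd, eth0 is matched here by the catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd warns about the "potentially unpredictable interface name" before taking the 10.0.0.73/16 DHCP lease. Pinning the interface explicitly only takes a small unit of your own; the file name and the choice to match on eth0 below are illustrative, not something this host actually configures:

  # /etc/systemd/network/10-eth0.network (hypothetical)
  [Match]
  Name=eth0

  [Network]
  DHCP=yes

Because .network files are applied in lexical order, a 10-* unit like this sorts ahead of the zz-default catch-all and would take precedence for eth0.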
Nov 6 05:25:54.218813 systemd-resolved[1502]: Defaulting to hostname 'linux'. Nov 6 05:25:54.220562 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 05:25:54.222519 systemd[1]: Reached target network.target - Network. Nov 6 05:25:54.223987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 05:25:54.225954 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 05:25:54.227774 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 05:25:54.229782 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 05:25:54.231776 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 05:25:54.233777 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 05:25:54.235610 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 05:25:54.237627 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 05:25:54.239636 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 05:25:54.239665 systemd[1]: Reached target paths.target - Path Units. Nov 6 05:25:54.241201 systemd[1]: Reached target timers.target - Timer Units. Nov 6 05:25:54.243539 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 05:25:54.247158 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 05:25:54.250886 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 05:25:54.253066 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 05:25:54.255224 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 05:25:54.266724 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 05:25:54.269278 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 05:25:54.272924 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 05:25:54.275339 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 05:25:54.279215 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 05:25:54.280972 systemd[1]: Reached target basic.target - Basic System. Nov 6 05:25:54.282798 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 05:25:54.282843 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 05:25:54.285044 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 05:25:54.289021 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 05:25:54.294167 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 05:25:54.298019 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 05:25:54.311125 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 05:25:54.313002 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Nov 6 05:25:54.314674 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 05:25:54.316744 jq[1553]: false Nov 6 05:25:54.318694 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 05:25:54.322816 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 05:25:54.340908 oslogin_cache_refresh[1555]: Refreshing passwd entry cache Nov 6 05:25:54.341078 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 05:25:54.341519 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing passwd entry cache Nov 6 05:25:54.343269 extend-filesystems[1554]: Found /dev/vda6 Nov 6 05:25:54.346322 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 05:25:54.349802 extend-filesystems[1554]: Found /dev/vda9 Nov 6 05:25:54.352619 extend-filesystems[1554]: Checking size of /dev/vda9 Nov 6 05:25:54.354260 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting users, quitting Nov 6 05:25:54.354247 oslogin_cache_refresh[1555]: Failure getting users, quitting Nov 6 05:25:54.354732 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 05:25:54.354732 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing group entry cache Nov 6 05:25:54.354273 oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 05:25:54.354339 oslogin_cache_refresh[1555]: Refreshing group entry cache Nov 6 05:25:54.360065 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting groups, quitting Nov 6 05:25:54.360065 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 05:25:54.360048 oslogin_cache_refresh[1555]: Failure getting groups, quitting Nov 6 05:25:54.360065 oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 05:25:54.362340 extend-filesystems[1554]: Resized partition /dev/vda9 Nov 6 05:25:54.363092 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 05:25:54.366591 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 05:25:54.366809 extend-filesystems[1575]: resize2fs 1.47.3 (8-Jul-2025) Nov 6 05:25:54.367253 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 05:25:54.368584 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 05:25:54.371694 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 05:25:54.378561 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 6 05:25:54.379713 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 05:25:54.382997 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 05:25:54.383275 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 05:25:54.383655 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 6 05:25:54.383928 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 05:25:54.387020 systemd[1]: motdgen.service: Deactivated successfully. 
Nov 6 05:25:54.387785 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 05:25:54.390997 jq[1579]: true Nov 6 05:25:54.393941 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 05:25:54.394460 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 05:25:54.404961 update_engine[1577]: I20251106 05:25:54.404191 1577 main.cc:92] Flatcar Update Engine starting Nov 6 05:25:54.425508 jq[1584]: true Nov 6 05:25:54.446626 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 6 05:25:54.451063 dbus-daemon[1551]: [system] SELinux support is enabled Nov 6 05:25:54.488917 update_engine[1577]: I20251106 05:25:54.455073 1577 update_check_scheduler.cc:74] Next update check in 5m52s Nov 6 05:25:54.451977 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 05:25:54.489186 tar[1582]: linux-amd64/LICENSE Nov 6 05:25:54.489186 tar[1582]: linux-amd64/helm Nov 6 05:25:54.457050 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 05:25:54.457076 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 05:25:54.459967 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 05:25:54.459982 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 05:25:54.462682 systemd[1]: Started update-engine.service - Update Engine. Nov 6 05:25:54.466383 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 05:25:54.488142 systemd-logind[1574]: Watching system buttons on /dev/input/event2 (Power Button) Nov 6 05:25:54.488164 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 05:25:54.490348 extend-filesystems[1575]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 6 05:25:54.490348 extend-filesystems[1575]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 6 05:25:54.490348 extend-filesystems[1575]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 6 05:25:54.488831 systemd-logind[1574]: New seat seat0. Nov 6 05:25:54.503836 extend-filesystems[1554]: Resized filesystem in /dev/vda9 Nov 6 05:25:54.495294 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 05:25:54.497636 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 05:25:54.498526 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 05:25:54.516502 bash[1613]: Updated "/home/core/.ssh/authorized_keys" Nov 6 05:25:54.526028 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 05:25:54.529018 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 6 05:25:54.540004 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 05:25:54.746792 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 05:25:54.789527 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 05:25:54.794248 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 05:25:54.818508 systemd[1]: issuegen.service: Deactivated successfully. 
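Note: the kernel and extend-filesystems entries above show /dev/vda9 being resized online from 456704 to 1784827 blocks at a 4 KiB block size. A quick back-of-the-envelope check of what those figures mean in bytes (plain arithmetic on the numbers in the log, nothing more):

    BLOCK_SIZE = 4096          # "1784827 (4k) blocks" per the resize2fs output above
    OLD_BLOCKS = 456_704
    NEW_BLOCKS = 1_784_827

    def gib(blocks: int) -> float:
        """Convert an ext4 block count to GiB at the 4 KiB block size reported above."""
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")                 # ~1.74 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")                 # ~6.81 GiB
    print(f"grown by {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")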
Nov 6 05:25:54.818874 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 05:25:54.823754 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 05:25:54.858148 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 05:25:54.863164 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 05:25:54.868735 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 05:25:54.870808 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 05:25:55.095520 containerd[1594]: time="2025-11-06T05:25:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 05:25:55.096404 containerd[1594]: time="2025-11-06T05:25:55.096340040Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4 Nov 6 05:25:55.111399 containerd[1594]: time="2025-11-06T05:25:55.111334760Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="21.611µs" Nov 6 05:25:55.111399 containerd[1594]: time="2025-11-06T05:25:55.111382089Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 05:25:55.111495 containerd[1594]: time="2025-11-06T05:25:55.111441681Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 05:25:55.111495 containerd[1594]: time="2025-11-06T05:25:55.111457220Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 05:25:55.111855 containerd[1594]: time="2025-11-06T05:25:55.111809581Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 6 05:25:55.111855 containerd[1594]: time="2025-11-06T05:25:55.111838425Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 05:25:55.111968 containerd[1594]: time="2025-11-06T05:25:55.111936839Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 05:25:55.111968 containerd[1594]: time="2025-11-06T05:25:55.111958931Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 05:25:55.112322 containerd[1594]: time="2025-11-06T05:25:55.112287467Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 05:25:55.112322 containerd[1594]: time="2025-11-06T05:25:55.112308406Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 05:25:55.112371 containerd[1594]: time="2025-11-06T05:25:55.112326921Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 05:25:55.112371 containerd[1594]: time="2025-11-06T05:25:55.112339725Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 6 05:25:55.113530 containerd[1594]: time="2025-11-06T05:25:55.112780101Z" level=info msg="skip loading 
plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 6 05:25:55.113530 containerd[1594]: time="2025-11-06T05:25:55.112821899Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 6 05:25:55.113530 containerd[1594]: time="2025-11-06T05:25:55.113113656Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 05:25:55.113530 containerd[1594]: time="2025-11-06T05:25:55.113388322Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 05:25:55.113530 containerd[1594]: time="2025-11-06T05:25:55.113423958Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 05:25:55.113530 containerd[1594]: time="2025-11-06T05:25:55.113434087Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 05:25:55.113530 containerd[1594]: time="2025-11-06T05:25:55.113500502Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 05:25:55.113766 containerd[1594]: time="2025-11-06T05:25:55.113747806Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 05:25:55.113864 containerd[1594]: time="2025-11-06T05:25:55.113831032Z" level=info msg="metadata content store policy set" policy=shared Nov 6 05:25:55.122024 containerd[1594]: time="2025-11-06T05:25:55.121987368Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 05:25:55.122089 containerd[1594]: time="2025-11-06T05:25:55.122035007Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 6 05:25:55.122184 containerd[1594]: time="2025-11-06T05:25:55.122161825Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 6 05:25:55.122214 containerd[1594]: time="2025-11-06T05:25:55.122196029Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 05:25:55.122235 containerd[1594]: time="2025-11-06T05:25:55.122221838Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 05:25:55.122256 containerd[1594]: time="2025-11-06T05:25:55.122235142Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 05:25:55.122256 containerd[1594]: time="2025-11-06T05:25:55.122246804Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 05:25:55.122300 containerd[1594]: time="2025-11-06T05:25:55.122256342Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 05:25:55.122300 containerd[1594]: time="2025-11-06T05:25:55.122268134Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 05:25:55.122300 containerd[1594]: time="2025-11-06T05:25:55.122279586Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 
Nov 6 05:25:55.122300 containerd[1594]: time="2025-11-06T05:25:55.122290687Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 6 05:25:55.122373 containerd[1594]: time="2025-11-06T05:25:55.122302930Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 05:25:55.122373 containerd[1594]: time="2025-11-06T05:25:55.122314261Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 05:25:55.122373 containerd[1594]: time="2025-11-06T05:25:55.122326945Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 05:25:55.122493 containerd[1594]: time="2025-11-06T05:25:55.122455556Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 05:25:55.122518 containerd[1594]: time="2025-11-06T05:25:55.122510619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 05:25:55.122538 containerd[1594]: time="2025-11-06T05:25:55.122525186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 05:25:55.122569 containerd[1594]: time="2025-11-06T05:25:55.122553299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 6 05:25:55.122569 containerd[1594]: time="2025-11-06T05:25:55.122565141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 05:25:55.122613 containerd[1594]: time="2025-11-06T05:25:55.122575621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 6 05:25:55.122613 containerd[1594]: time="2025-11-06T05:25:55.122587163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 05:25:55.122613 containerd[1594]: time="2025-11-06T05:25:55.122601990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 05:25:55.122613 containerd[1594]: time="2025-11-06T05:25:55.122613823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 05:25:55.122686 containerd[1594]: time="2025-11-06T05:25:55.122625515Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 05:25:55.122686 containerd[1594]: time="2025-11-06T05:25:55.122648237Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 05:25:55.122729 containerd[1594]: time="2025-11-06T05:25:55.122699914Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 05:25:55.122800 containerd[1594]: time="2025-11-06T05:25:55.122774574Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 05:25:55.122800 containerd[1594]: time="2025-11-06T05:25:55.122796505Z" level=info msg="Start snapshots syncer" Nov 6 05:25:55.122850 containerd[1594]: time="2025-11-06T05:25:55.122834036Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 05:25:55.123199 containerd[1594]: time="2025-11-06T05:25:55.123148225Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 05:25:55.123363 containerd[1594]: time="2025-11-06T05:25:55.123231761Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 05:25:55.123363 containerd[1594]: time="2025-11-06T05:25:55.123318855Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 05:25:55.123460 containerd[1594]: time="2025-11-06T05:25:55.123435293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 05:25:55.123500 containerd[1594]: time="2025-11-06T05:25:55.123460010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 05:25:55.123500 containerd[1594]: time="2025-11-06T05:25:55.123489284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 05:25:55.123500 containerd[1594]: time="2025-11-06T05:25:55.123500546Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 05:25:55.123566 containerd[1594]: time="2025-11-06T05:25:55.123511686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 05:25:55.123566 containerd[1594]: time="2025-11-06T05:25:55.123523649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 05:25:55.123566 containerd[1594]: time="2025-11-06T05:25:55.123534790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 05:25:55.123566 containerd[1594]: time="2025-11-06T05:25:55.123562131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 05:25:55.123638 
containerd[1594]: time="2025-11-06T05:25:55.123585805Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 05:25:55.123638 containerd[1594]: time="2025-11-06T05:25:55.123625500Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 05:25:55.123689 containerd[1594]: time="2025-11-06T05:25:55.123638685Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 05:25:55.123689 containerd[1594]: time="2025-11-06T05:25:55.123648463Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 05:25:55.123689 containerd[1594]: time="2025-11-06T05:25:55.123658522Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 05:25:55.123689 containerd[1594]: time="2025-11-06T05:25:55.123667639Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 05:25:55.123763 containerd[1594]: time="2025-11-06T05:25:55.123702805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 05:25:55.123763 containerd[1594]: time="2025-11-06T05:25:55.123715539Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 05:25:55.123763 containerd[1594]: time="2025-11-06T05:25:55.123729916Z" level=info msg="runtime interface created" Nov 6 05:25:55.123763 containerd[1594]: time="2025-11-06T05:25:55.123735697Z" level=info msg="created NRI interface" Nov 6 05:25:55.123763 containerd[1594]: time="2025-11-06T05:25:55.123762276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 05:25:55.123868 containerd[1594]: time="2025-11-06T05:25:55.123775631Z" level=info msg="Connect containerd service" Nov 6 05:25:55.123868 containerd[1594]: time="2025-11-06T05:25:55.123819704Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 05:25:55.124747 containerd[1594]: time="2025-11-06T05:25:55.124708621Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 05:25:55.257635 tar[1582]: linux-amd64/README.md Nov 6 05:25:55.307452 containerd[1594]: time="2025-11-06T05:25:55.307375777Z" level=info msg="Start subscribing containerd event" Nov 6 05:25:55.307591 containerd[1594]: time="2025-11-06T05:25:55.307453333Z" level=info msg="Start recovering state" Nov 6 05:25:55.307633 containerd[1594]: time="2025-11-06T05:25:55.307613483Z" level=info msg="Start event monitor" Nov 6 05:25:55.307633 containerd[1594]: time="2025-11-06T05:25:55.307642367Z" level=info msg="Start cni network conf syncer for default" Nov 6 05:25:55.307697 containerd[1594]: time="2025-11-06T05:25:55.307658958Z" level=info msg="Start streaming server" Nov 6 05:25:55.307697 containerd[1594]: time="2025-11-06T05:25:55.307674738Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 05:25:55.307697 containerd[1594]: time="2025-11-06T05:25:55.307683214Z" level=info msg="runtime interface starting up..." 
Nov 6 05:25:55.307697 containerd[1594]: time="2025-11-06T05:25:55.307689997Z" level=info msg="starting plugins..." Nov 6 05:25:55.307836 containerd[1594]: time="2025-11-06T05:25:55.307711116Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 05:25:55.308022 containerd[1594]: time="2025-11-06T05:25:55.307985541Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 05:25:55.308072 containerd[1594]: time="2025-11-06T05:25:55.308058558Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 05:25:55.308141 containerd[1594]: time="2025-11-06T05:25:55.308124231Z" level=info msg="containerd successfully booted in 0.213442s" Nov 6 05:25:55.308272 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 05:25:55.312782 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 05:25:55.474800 systemd-networkd[1497]: eth0: Gained IPv6LL Nov 6 05:25:55.478590 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 05:25:55.481197 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 05:25:55.484526 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 6 05:25:55.487726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:25:55.505011 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 05:25:55.533304 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 05:25:55.535889 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 6 05:25:55.536168 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 6 05:25:55.540434 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 05:25:56.611385 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 05:25:56.614762 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:54116.service - OpenSSH per-connection server daemon (10.0.0.1:54116). Nov 6 05:25:56.716405 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 54116 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:25:56.718422 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:25:56.728612 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 05:25:56.732461 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 05:25:56.742408 systemd-logind[1574]: New session 1 of user core. Nov 6 05:25:56.799205 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 05:25:56.805087 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 05:25:56.825279 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 05:25:56.827988 systemd-logind[1574]: New session c1 of user core. Nov 6 05:25:56.928551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:25:56.930994 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 05:25:56.949808 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 05:25:56.991242 systemd[1685]: Queued start job for default target default.target. Nov 6 05:25:57.000352 systemd[1685]: Created slice app.slice - User Application Slice. 
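Note: containerd's CRI plugin reported above that no network config was found in /etc/cni/net.d, so pod networking stays uninitialized until a CNI plugin drops a config there; on a node that has not yet joined a cluster this is expected. A tiny check one could run on the node to see whether that condition has cleared; the directory path comes from the log line, while the accepted extension set and everything else here is an assumption made for illustration:

    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/cni/net.d")  # path reported by containerd above

    def cni_configs() -> list[Path]:
        """List CNI config files in the directory the CRI plugin complained about."""
        if not CNI_CONF_DIR.is_dir():
            return []
        # Assumed extension set; adjust to whatever your CNI loader actually accepts.
        return sorted(p for p in CNI_CONF_DIR.iterdir()
                      if p.suffix in {".conf", ".conflist", ".json"})

    if __name__ == "__main__":
        confs = cni_configs()
        print("CNI configs found:" if confs else "no CNI config yet (matches the log error)")
        for p in confs:
            print(" ", p)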
Nov 6 05:25:57.000384 systemd[1685]: Reached target paths.target - Paths. Nov 6 05:25:57.000434 systemd[1685]: Reached target timers.target - Timers. Nov 6 05:25:57.002164 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 05:25:57.014948 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 05:25:57.015082 systemd[1685]: Reached target sockets.target - Sockets. Nov 6 05:25:57.015128 systemd[1685]: Reached target basic.target - Basic System. Nov 6 05:25:57.015171 systemd[1685]: Reached target default.target - Main User Target. Nov 6 05:25:57.015210 systemd[1685]: Startup finished in 177ms. Nov 6 05:25:57.015593 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 05:25:57.027608 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 05:25:57.031915 systemd[1]: Startup finished in 3.936s (kernel) + 7.021s (initrd) + 5.543s (userspace) = 16.501s. Nov 6 05:25:57.061682 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:54130.service - OpenSSH per-connection server daemon (10.0.0.1:54130). Nov 6 05:25:57.113042 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 54130 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:25:57.114368 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:25:57.118855 systemd-logind[1574]: New session 2 of user core. Nov 6 05:25:57.129606 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 05:25:57.143173 sshd[1715]: Connection closed by 10.0.0.1 port 54130 Nov 6 05:25:57.143165 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Nov 6 05:25:57.156086 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:54130.service: Deactivated successfully. Nov 6 05:25:57.158129 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 05:25:57.158883 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit. Nov 6 05:25:57.162287 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:54140.service - OpenSSH per-connection server daemon (10.0.0.1:54140). Nov 6 05:25:57.163021 systemd-logind[1574]: Removed session 2. Nov 6 05:25:57.239120 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 54140 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:25:57.240303 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:25:57.244820 systemd-logind[1574]: New session 3 of user core. Nov 6 05:25:57.253633 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 05:25:57.262859 sshd[1724]: Connection closed by 10.0.0.1 port 54140 Nov 6 05:25:57.263363 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Nov 6 05:25:57.273090 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:54140.service: Deactivated successfully. Nov 6 05:25:57.275194 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 05:25:57.275956 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit. Nov 6 05:25:57.278989 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:54156.service - OpenSSH per-connection server daemon (10.0.0.1:54156). Nov 6 05:25:57.279612 systemd-logind[1574]: Removed session 3. 
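Note: systemd reports the boot above as 3.936 s (kernel) + 7.021 s (initrd) + 5.543 s (userspace) = 16.501 s; the printed total comes from unrounded internal timestamps, so re-adding the three rounded terms can land a millisecond off. A one-line check on the figures from that entry:

    # Figures copied from the "Startup finished" line above.
    kernel, initrd, userspace = 3.936, 7.021, 5.543
    total = kernel + initrd + userspace
    print(f"{total:.3f}s recomputed vs 16.501s reported")  # 16.500s; 1 ms gap from rounding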
Nov 6 05:25:57.338059 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 54156 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:25:57.339797 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:25:57.344529 systemd-logind[1574]: New session 4 of user core. Nov 6 05:25:57.354619 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 05:25:57.369100 sshd[1733]: Connection closed by 10.0.0.1 port 54156 Nov 6 05:25:57.369659 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Nov 6 05:25:57.378179 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:54156.service: Deactivated successfully. Nov 6 05:25:57.380342 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 05:25:57.381169 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit. Nov 6 05:25:57.384303 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:54170.service - OpenSSH per-connection server daemon (10.0.0.1:54170). Nov 6 05:25:57.385604 systemd-logind[1574]: Removed session 4. Nov 6 05:25:57.438860 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 54170 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:25:57.440179 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:25:57.444804 systemd-logind[1574]: New session 5 of user core. Nov 6 05:25:57.453769 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 05:25:57.476812 kubelet[1696]: E1106 05:25:57.476562 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 05:25:57.480868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 05:25:57.481081 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 05:25:57.481491 systemd[1]: kubelet.service: Consumed 1.777s CPU time, 267.4M memory peak. Nov 6 05:25:57.481519 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 05:25:57.481853 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 05:25:57.509581 sudo[1743]: pam_unix(sudo:session): session closed for user root Nov 6 05:25:57.511457 sshd[1742]: Connection closed by 10.0.0.1 port 54170 Nov 6 05:25:57.511847 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Nov 6 05:25:57.528356 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:54170.service: Deactivated successfully. Nov 6 05:25:57.530287 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 05:25:57.531115 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit. Nov 6 05:25:57.534277 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:54186.service - OpenSSH per-connection server daemon (10.0.0.1:54186). Nov 6 05:25:57.534966 systemd-logind[1574]: Removed session 5. Nov 6 05:25:57.587612 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 54186 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:25:57.588994 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:25:57.593642 systemd-logind[1574]: New session 6 of user core. Nov 6 05:25:57.603612 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 6 05:25:57.617222 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 05:25:57.617559 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 05:25:57.624761 sudo[1756]: pam_unix(sudo:session): session closed for user root Nov 6 05:25:57.631494 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 05:25:57.631821 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 05:25:57.642245 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 05:25:57.687013 augenrules[1778]: No rules Nov 6 05:25:57.688871 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 05:25:57.689198 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 05:25:57.690502 sudo[1755]: pam_unix(sudo:session): session closed for user root Nov 6 05:25:57.691985 sshd[1754]: Connection closed by 10.0.0.1 port 54186 Nov 6 05:25:57.692364 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Nov 6 05:25:57.705492 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:54186.service: Deactivated successfully. Nov 6 05:25:57.707486 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 05:25:57.708213 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. Nov 6 05:25:57.711214 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:54200.service - OpenSSH per-connection server daemon (10.0.0.1:54200). Nov 6 05:25:57.712098 systemd-logind[1574]: Removed session 6. Nov 6 05:25:57.770956 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 54200 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:25:57.772228 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:25:57.776527 systemd-logind[1574]: New session 7 of user core. Nov 6 05:25:57.786614 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 05:25:57.798178 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 05:25:57.798492 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 05:25:58.732614 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 05:25:58.751843 (dockerd)[1811]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 05:25:59.125660 dockerd[1811]: time="2025-11-06T05:25:59.125484895Z" level=info msg="Starting up" Nov 6 05:25:59.126459 dockerd[1811]: time="2025-11-06T05:25:59.126425028Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 05:25:59.154177 dockerd[1811]: time="2025-11-06T05:25:59.154108584Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 05:25:59.637129 dockerd[1811]: time="2025-11-06T05:25:59.637003966Z" level=info msg="Loading containers: start." Nov 6 05:25:59.653523 kernel: Initializing XFRM netlink socket Nov 6 05:26:00.013903 systemd-networkd[1497]: docker0: Link UP Nov 6 05:26:00.020488 dockerd[1811]: time="2025-11-06T05:26:00.020415509Z" level=info msg="Loading containers: done." 
Nov 6 05:26:00.043851 dockerd[1811]: time="2025-11-06T05:26:00.043788963Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 05:26:00.044118 dockerd[1811]: time="2025-11-06T05:26:00.043906293Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 05:26:00.044118 dockerd[1811]: time="2025-11-06T05:26:00.044040354Z" level=info msg="Initializing buildkit" Nov 6 05:26:00.080449 dockerd[1811]: time="2025-11-06T05:26:00.080368442Z" level=info msg="Completed buildkit initialization" Nov 6 05:26:00.201255 dockerd[1811]: time="2025-11-06T05:26:00.201160030Z" level=info msg="Daemon has completed initialization" Nov 6 05:26:00.201644 dockerd[1811]: time="2025-11-06T05:26:00.201372278Z" level=info msg="API listen on /run/docker.sock" Nov 6 05:26:00.201557 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 05:26:01.223841 containerd[1594]: time="2025-11-06T05:26:01.223747286Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 6 05:26:02.253803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2487945025.mount: Deactivated successfully. Nov 6 05:26:03.183352 containerd[1594]: time="2025-11-06T05:26:03.183290208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:03.184134 containerd[1594]: time="2025-11-06T05:26:03.184103684Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=28476872" Nov 6 05:26:03.185337 containerd[1594]: time="2025-11-06T05:26:03.185270181Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:03.187899 containerd[1594]: time="2025-11-06T05:26:03.187849228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:03.188924 containerd[1594]: time="2025-11-06T05:26:03.188877907Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.965041193s" Nov 6 05:26:03.188985 containerd[1594]: time="2025-11-06T05:26:03.188926258Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 6 05:26:03.189701 containerd[1594]: time="2025-11-06T05:26:03.189674741Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 6 05:26:04.598011 containerd[1594]: time="2025-11-06T05:26:04.597921597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:04.598782 containerd[1594]: time="2025-11-06T05:26:04.598721507Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active 
requests=0, bytes read=26015441" Nov 6 05:26:04.599954 containerd[1594]: time="2025-11-06T05:26:04.599924313Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:04.602819 containerd[1594]: time="2025-11-06T05:26:04.602787993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:04.603879 containerd[1594]: time="2025-11-06T05:26:04.603808216Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.414090986s" Nov 6 05:26:04.603879 containerd[1594]: time="2025-11-06T05:26:04.603861657Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 6 05:26:04.604429 containerd[1594]: time="2025-11-06T05:26:04.604393644Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 6 05:26:06.072692 containerd[1594]: time="2025-11-06T05:26:06.072619372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:06.073451 containerd[1594]: time="2025-11-06T05:26:06.073409233Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20147431" Nov 6 05:26:06.074692 containerd[1594]: time="2025-11-06T05:26:06.074635733Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:06.077382 containerd[1594]: time="2025-11-06T05:26:06.077321299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:06.078390 containerd[1594]: time="2025-11-06T05:26:06.078354467Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.473923883s" Nov 6 05:26:06.078438 containerd[1594]: time="2025-11-06T05:26:06.078393250Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 6 05:26:06.078971 containerd[1594]: time="2025-11-06T05:26:06.078946137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 6 05:26:07.474401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4111900396.mount: Deactivated successfully. Nov 6 05:26:07.731376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
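Note: the image pulls above report both a "bytes read" counter and the wall-clock pull time (for kube-scheduler: 20,147,431 bytes in 1.473923883 s). Treating the counter as the transferred size gives a rough effective throughput; this is only an estimate, since the counter reflects registry bytes read at log time rather than a precise transfer measurement, and the helper name is just for illustration:

    def throughput_mib_s(bytes_read: int, seconds: float) -> float:
        """Rough effective pull throughput in MiB/s from the figures containerd logs."""
        return bytes_read / seconds / 2**20

    # Figures from the kube-scheduler pull logged above.
    print(f"{throughput_mib_s(20_147_431, 1.473923883):.1f} MiB/s")  # ~13.0 MiB/s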
Nov 6 05:26:07.733205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:26:08.308823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:26:08.313157 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 05:26:08.360063 kubelet[2112]: E1106 05:26:08.360005 2112 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 05:26:08.368196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 05:26:08.368405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 05:26:08.369553 systemd[1]: kubelet.service: Consumed 588ms CPU time, 108.6M memory peak. Nov 6 05:26:08.566885 containerd[1594]: time="2025-11-06T05:26:08.566774363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:08.567902 containerd[1594]: time="2025-11-06T05:26:08.567834020Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=20341046" Nov 6 05:26:08.568959 containerd[1594]: time="2025-11-06T05:26:08.568916790Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:08.570905 containerd[1594]: time="2025-11-06T05:26:08.570870685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:08.571383 containerd[1594]: time="2025-11-06T05:26:08.571344002Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.492356188s" Nov 6 05:26:08.571440 containerd[1594]: time="2025-11-06T05:26:08.571382695Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 6 05:26:08.572315 containerd[1594]: time="2025-11-06T05:26:08.572287452Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 6 05:26:09.201116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740638005.mount: Deactivated successfully. 
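Note: both kubelet attempts so far exit with status 1 because /var/lib/kubelet/config.yaml does not exist yet, and systemd schedules a restart; on a kubeadm-style bootstrap that file normally appears once the node is initialized or joined. A trivial pre-check mirroring what the error message says; the path is taken straight from the log, everything else is illustrative:

    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path named in the kubelet error above

    def main() -> int:
        if os.path.isfile(KUBELET_CONFIG):
            print(f"{KUBELET_CONFIG} present ({os.path.getsize(KUBELET_CONFIG)} bytes)")
            return 0
        print(f"{KUBELET_CONFIG} missing - kubelet will keep failing and restarting",
              file=sys.stderr)
        return 1

    if __name__ == "__main__":
        sys.exit(main())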
Nov 6 05:26:10.470068 containerd[1594]: time="2025-11-06T05:26:10.469996604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:10.470762 containerd[1594]: time="2025-11-06T05:26:10.470708419Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20580977" Nov 6 05:26:10.471887 containerd[1594]: time="2025-11-06T05:26:10.471839800Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:10.474739 containerd[1594]: time="2025-11-06T05:26:10.474706086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:10.475598 containerd[1594]: time="2025-11-06T05:26:10.475566068Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.903249151s" Nov 6 05:26:10.475642 containerd[1594]: time="2025-11-06T05:26:10.475597507Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 6 05:26:10.476570 containerd[1594]: time="2025-11-06T05:26:10.476538492Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 05:26:10.976256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564072907.mount: Deactivated successfully. 
Nov 6 05:26:10.981741 containerd[1594]: time="2025-11-06T05:26:10.981695301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 05:26:10.982432 containerd[1594]: time="2025-11-06T05:26:10.982409030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 05:26:10.983501 containerd[1594]: time="2025-11-06T05:26:10.983440144Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 05:26:10.985560 containerd[1594]: time="2025-11-06T05:26:10.985526787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 05:26:10.986157 containerd[1594]: time="2025-11-06T05:26:10.986110562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 509.534329ms" Nov 6 05:26:10.986157 containerd[1594]: time="2025-11-06T05:26:10.986153562Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 05:26:10.986692 containerd[1594]: time="2025-11-06T05:26:10.986644894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 6 05:26:11.559998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735175097.mount: Deactivated successfully. 
Nov 6 05:26:13.153849 containerd[1594]: time="2025-11-06T05:26:13.153776309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:13.155246 containerd[1594]: time="2025-11-06T05:26:13.155184790Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Nov 6 05:26:13.156504 containerd[1594]: time="2025-11-06T05:26:13.156435656Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:13.306801 containerd[1594]: time="2025-11-06T05:26:13.306675108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:13.307814 containerd[1594]: time="2025-11-06T05:26:13.307716341Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.321029479s" Nov 6 05:26:13.307977 containerd[1594]: time="2025-11-06T05:26:13.307787885Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 6 05:26:17.825790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:26:17.825974 systemd[1]: kubelet.service: Consumed 588ms CPU time, 108.6M memory peak. Nov 6 05:26:17.828331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:26:17.854776 systemd[1]: Reload requested from client PID 2264 ('systemctl') (unit session-7.scope)... Nov 6 05:26:17.854792 systemd[1]: Reloading... Nov 6 05:26:17.947545 zram_generator::config[2306]: No configuration found. Nov 6 05:26:18.216221 systemd[1]: Reloading finished in 361 ms. Nov 6 05:26:18.295261 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 05:26:18.295417 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 05:26:18.295965 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:26:18.296046 systemd[1]: kubelet.service: Consumed 239ms CPU time, 98.2M memory peak. Nov 6 05:26:18.299193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:26:18.580677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:26:18.591803 (kubelet)[2354]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 05:26:18.651684 kubelet[2354]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 05:26:18.651684 kubelet[2354]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 05:26:18.651684 kubelet[2354]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 05:26:18.652196 kubelet[2354]: I1106 05:26:18.651739 2354 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 05:26:19.517909 kubelet[2354]: I1106 05:26:19.517863 2354 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 05:26:19.517909 kubelet[2354]: I1106 05:26:19.517895 2354 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 05:26:19.518166 kubelet[2354]: I1106 05:26:19.518146 2354 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 05:26:19.553453 kubelet[2354]: E1106 05:26:19.553401 2354 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 05:26:19.554809 kubelet[2354]: I1106 05:26:19.554764 2354 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 05:26:19.565571 kubelet[2354]: I1106 05:26:19.565490 2354 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 05:26:19.571440 kubelet[2354]: I1106 05:26:19.571414 2354 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 6 05:26:19.571823 kubelet[2354]: I1106 05:26:19.571771 2354 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 05:26:19.572024 kubelet[2354]: I1106 05:26:19.571811 2354 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 05:26:19.572235 kubelet[2354]: I1106 05:26:19.572038 2354 topology_manager.go:138] "Creating topology manager 
with none policy" Nov 6 05:26:19.572235 kubelet[2354]: I1106 05:26:19.572049 2354 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 05:26:19.572235 kubelet[2354]: I1106 05:26:19.572236 2354 state_mem.go:36] "Initialized new in-memory state store" Nov 6 05:26:19.573995 kubelet[2354]: I1106 05:26:19.573967 2354 kubelet.go:480] "Attempting to sync node with API server" Nov 6 05:26:19.573995 kubelet[2354]: I1106 05:26:19.573996 2354 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 05:26:19.574067 kubelet[2354]: I1106 05:26:19.574037 2354 kubelet.go:386] "Adding apiserver pod source" Nov 6 05:26:19.575492 kubelet[2354]: I1106 05:26:19.575336 2354 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 05:26:19.578288 kubelet[2354]: E1106 05:26:19.578243 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 05:26:19.578465 kubelet[2354]: E1106 05:26:19.578423 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 05:26:19.580156 kubelet[2354]: I1106 05:26:19.580118 2354 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 6 05:26:19.580691 kubelet[2354]: I1106 05:26:19.580669 2354 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 05:26:19.581515 kubelet[2354]: W1106 05:26:19.581497 2354 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
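The reflector failures above all point at the same cause: the kubelet comes up before the kube-apiserver it is about to launch as a static pod, so every list/watch against https://10.0.0.73:6443 fails with "connection refused". A minimal standalone probe along these lines can distinguish that state from a routing or firewall problem; the endpoint is taken from the log, and the probe itself is illustrative, not part of the kubelet.

// apiprobe.go: minimal reachability check for the API server endpoint seen in
// the log (10.0.0.73:6443). Illustrative only; not part of the kubelet.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const endpoint = "10.0.0.73:6443" // address taken from the reflector errors above

	conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
	if err != nil {
		// "connection refused" means the host answered but nothing listens on the
		// port yet (the static kube-apiserver pod has not started); a timeout would
		// instead suggest a routing or firewall problem.
		fmt.Fprintf(os.Stderr, "apiserver not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("apiserver endpoint %s accepts TCP connections\n", endpoint)
}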
Nov 6 05:26:19.584692 kubelet[2354]: I1106 05:26:19.584665 2354 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 05:26:19.584747 kubelet[2354]: I1106 05:26:19.584727 2354 server.go:1289] "Started kubelet" Nov 6 05:26:19.586758 kubelet[2354]: I1106 05:26:19.586688 2354 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 05:26:19.588506 kubelet[2354]: I1106 05:26:19.588041 2354 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 05:26:19.588506 kubelet[2354]: I1106 05:26:19.588066 2354 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 05:26:19.588982 kubelet[2354]: I1106 05:26:19.588952 2354 server.go:317] "Adding debug handlers to kubelet server" Nov 6 05:26:19.590650 kubelet[2354]: I1106 05:26:19.590024 2354 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 05:26:19.590650 kubelet[2354]: I1106 05:26:19.588044 2354 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 05:26:19.591589 kubelet[2354]: I1106 05:26:19.591570 2354 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 05:26:19.591952 kubelet[2354]: E1106 05:26:19.591933 2354 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 05:26:19.592839 kubelet[2354]: I1106 05:26:19.592076 2354 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 05:26:19.592839 kubelet[2354]: I1106 05:26:19.592206 2354 reconciler.go:26] "Reconciler: start to sync state" Nov 6 05:26:19.592839 kubelet[2354]: E1106 05:26:19.588755 2354 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187553a0dee2b100 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 05:26:19.584688384 +0000 UTC m=+0.986633731,LastTimestamp:2025-11-06 05:26:19.584688384 +0000 UTC m=+0.986633731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 05:26:19.592839 kubelet[2354]: E1106 05:26:19.592789 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 05:26:19.593716 kubelet[2354]: I1106 05:26:19.593696 2354 factory.go:223] Registration of the systemd container factory successfully Nov 6 05:26:19.593804 kubelet[2354]: I1106 05:26:19.593782 2354 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 05:26:19.594748 kubelet[2354]: E1106 05:26:19.594730 2354 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 05:26:19.595320 kubelet[2354]: E1106 05:26:19.595279 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms" Nov 6 05:26:19.595439 kubelet[2354]: I1106 05:26:19.595424 2354 factory.go:223] Registration of the containerd container factory successfully Nov 6 05:26:19.612397 kubelet[2354]: I1106 05:26:19.612363 2354 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 05:26:19.612539 kubelet[2354]: I1106 05:26:19.612406 2354 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 05:26:19.612539 kubelet[2354]: I1106 05:26:19.612431 2354 state_mem.go:36] "Initialized new in-memory state store" Nov 6 05:26:19.614192 kubelet[2354]: I1106 05:26:19.614154 2354 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 05:26:19.615436 kubelet[2354]: I1106 05:26:19.615418 2354 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 05:26:19.615511 kubelet[2354]: I1106 05:26:19.615461 2354 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 05:26:19.615717 kubelet[2354]: I1106 05:26:19.615662 2354 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 05:26:19.615717 kubelet[2354]: I1106 05:26:19.615681 2354 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 05:26:19.615846 kubelet[2354]: E1106 05:26:19.615727 2354 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 05:26:19.616196 kubelet[2354]: E1106 05:26:19.616176 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 05:26:19.692816 kubelet[2354]: E1106 05:26:19.692779 2354 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 05:26:19.716161 kubelet[2354]: E1106 05:26:19.716093 2354 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 05:26:19.793537 kubelet[2354]: E1106 05:26:19.793453 2354 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 05:26:19.796466 kubelet[2354]: E1106 05:26:19.796036 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms" Nov 6 05:26:19.893912 kubelet[2354]: E1106 05:26:19.893869 2354 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 05:26:19.905713 kubelet[2354]: I1106 05:26:19.905678 2354 policy_none.go:49] "None policy: Start" Nov 6 05:26:19.905713 kubelet[2354]: I1106 05:26:19.905712 2354 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 05:26:19.905779 kubelet[2354]: I1106 05:26:19.905729 2354 
state_mem.go:35] "Initializing new in-memory state store" Nov 6 05:26:19.912833 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 05:26:19.916254 kubelet[2354]: E1106 05:26:19.916218 2354 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 05:26:19.926884 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 05:26:19.930129 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 05:26:19.945619 kubelet[2354]: E1106 05:26:19.945580 2354 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 05:26:19.945920 kubelet[2354]: I1106 05:26:19.945894 2354 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 05:26:19.946126 kubelet[2354]: I1106 05:26:19.945927 2354 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 05:26:19.946265 kubelet[2354]: I1106 05:26:19.946231 2354 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 05:26:19.947161 kubelet[2354]: E1106 05:26:19.947110 2354 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 05:26:19.947161 kubelet[2354]: E1106 05:26:19.947164 2354 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 6 05:26:20.048518 kubelet[2354]: I1106 05:26:20.048300 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 05:26:20.048966 kubelet[2354]: E1106 05:26:20.048910 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Nov 6 05:26:20.196811 kubelet[2354]: E1106 05:26:20.196749 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms" Nov 6 05:26:20.251282 kubelet[2354]: I1106 05:26:20.251222 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 05:26:20.251652 kubelet[2354]: E1106 05:26:20.251606 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Nov 6 05:26:20.330380 systemd[1]: Created slice kubepods-burstable-pod4b6dfa5badab08ee8005491ead468a05.slice - libcontainer container kubepods-burstable-pod4b6dfa5badab08ee8005491ead468a05.slice. Nov 6 05:26:20.348793 kubelet[2354]: E1106 05:26:20.348750 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:26:20.354651 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
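The "Failed to ensure lease exists, will retry" messages show the retry interval doubling while the API server stays unreachable: 200ms, then 400ms, then 800ms here, and 1.6s further down. Below is a rough sketch of that doubling pattern with a hypothetical ensureLease stand-in and an arbitrary cap; it mirrors the observed behavior only and is not the kubelet's actual retry code.

// leasebackoff.go: sketch of the doubling retry interval visible in the
// "Failed to ensure lease exists, will retry" messages (200ms, 400ms, 800ms,
// 1.6s). Mirrors the observed pattern only; not the kubelet's implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the real call that keeps failing with
// "connection refused" until the API server is up. Hypothetical helper.
func ensureLease() error {
	return errors.New("dial tcp 10.0.0.73:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond // first interval seen in the log
	const maxInterval = 7 * time.Second // arbitrary cap for the sketch; the log only shows up to 1.6s

	for attempt := 1; attempt <= 5; attempt++ {
		if err := ensureLease(); err == nil {
			fmt.Println("lease ensured")
			return
		}
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
		time.Sleep(interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
	fmt.Println("giving up after 5 attempts")
}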
Nov 6 05:26:20.356792 kubelet[2354]: E1106 05:26:20.356767 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:26:20.358864 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 6 05:26:20.360744 kubelet[2354]: E1106 05:26:20.360702 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:26:20.397184 kubelet[2354]: I1106 05:26:20.397143 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:20.397184 kubelet[2354]: I1106 05:26:20.397184 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:20.397360 kubelet[2354]: I1106 05:26:20.397211 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 6 05:26:20.397360 kubelet[2354]: I1106 05:26:20.397249 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b6dfa5badab08ee8005491ead468a05-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b6dfa5badab08ee8005491ead468a05\") " pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:20.397360 kubelet[2354]: I1106 05:26:20.397286 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:20.397360 kubelet[2354]: I1106 05:26:20.397320 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:20.397360 kubelet[2354]: I1106 05:26:20.397355 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b6dfa5badab08ee8005491ead468a05-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b6dfa5badab08ee8005491ead468a05\") " pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:20.397504 kubelet[2354]: I1106 05:26:20.397393 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b6dfa5badab08ee8005491ead468a05-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b6dfa5badab08ee8005491ead468a05\") " pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:20.397504 kubelet[2354]: I1106 05:26:20.397430 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:20.650305 kubelet[2354]: E1106 05:26:20.650140 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:20.651009 containerd[1594]: time="2025-11-06T05:26:20.650903358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b6dfa5badab08ee8005491ead468a05,Namespace:kube-system,Attempt:0,}" Nov 6 05:26:20.652954 kubelet[2354]: I1106 05:26:20.652903 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 05:26:20.653300 kubelet[2354]: E1106 05:26:20.653272 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Nov 6 05:26:20.659504 kubelet[2354]: E1106 05:26:20.659462 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:20.659973 containerd[1594]: time="2025-11-06T05:26:20.659928092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 6 05:26:20.661045 kubelet[2354]: E1106 05:26:20.661025 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:20.661362 containerd[1594]: time="2025-11-06T05:26:20.661325363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 6 05:26:20.684117 containerd[1594]: time="2025-11-06T05:26:20.684061551Z" level=info msg="connecting to shim f4365a61548c489d83fe8d2bd6b2330ae8d8a2a16aa811fb1791b2dcae71617e" address="unix:///run/containerd/s/0375047d699501759ac782cf15dca4cd8a58558be08dde8233ee069b265b1d95" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:26:20.702261 containerd[1594]: time="2025-11-06T05:26:20.701849741Z" level=info msg="connecting to shim 03e89524a067c208c4b9153d79c2c96d4dbd9d16bb59aead53bc8ac3dc1dcf61" address="unix:///run/containerd/s/d792e88172204ae34ff95019deb47e495add7493909043eafb10be303e970004" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:26:20.703143 containerd[1594]: time="2025-11-06T05:26:20.703100917Z" level=info msg="connecting to shim de3a89b392ad20812c9ce9fc2a6d8b482dae203fc922f3a6c14ea6320c8891ef" address="unix:///run/containerd/s/9c989aa58227b395823802c8ec75bc49bd58aa212d75a77caada18acc5b11c63" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:26:20.743947 kubelet[2354]: E1106 05:26:20.743892 2354 reflector.go:200] "Failed to watch" err="failed to list 
*v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 05:26:20.798629 systemd[1]: Started cri-containerd-f4365a61548c489d83fe8d2bd6b2330ae8d8a2a16aa811fb1791b2dcae71617e.scope - libcontainer container f4365a61548c489d83fe8d2bd6b2330ae8d8a2a16aa811fb1791b2dcae71617e. Nov 6 05:26:20.802255 systemd[1]: Started cri-containerd-de3a89b392ad20812c9ce9fc2a6d8b482dae203fc922f3a6c14ea6320c8891ef.scope - libcontainer container de3a89b392ad20812c9ce9fc2a6d8b482dae203fc922f3a6c14ea6320c8891ef. Nov 6 05:26:20.828678 systemd[1]: Started cri-containerd-03e89524a067c208c4b9153d79c2c96d4dbd9d16bb59aead53bc8ac3dc1dcf61.scope - libcontainer container 03e89524a067c208c4b9153d79c2c96d4dbd9d16bb59aead53bc8ac3dc1dcf61. Nov 6 05:26:20.877141 containerd[1594]: time="2025-11-06T05:26:20.877086622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"de3a89b392ad20812c9ce9fc2a6d8b482dae203fc922f3a6c14ea6320c8891ef\"" Nov 6 05:26:20.878760 kubelet[2354]: E1106 05:26:20.878732 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:20.884521 containerd[1594]: time="2025-11-06T05:26:20.884481330Z" level=info msg="CreateContainer within sandbox \"de3a89b392ad20812c9ce9fc2a6d8b482dae203fc922f3a6c14ea6320c8891ef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 05:26:20.886336 containerd[1594]: time="2025-11-06T05:26:20.886296123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b6dfa5badab08ee8005491ead468a05,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4365a61548c489d83fe8d2bd6b2330ae8d8a2a16aa811fb1791b2dcae71617e\"" Nov 6 05:26:20.887902 kubelet[2354]: E1106 05:26:20.887860 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:20.892735 containerd[1594]: time="2025-11-06T05:26:20.892708088Z" level=info msg="CreateContainer within sandbox \"f4365a61548c489d83fe8d2bd6b2330ae8d8a2a16aa811fb1791b2dcae71617e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 05:26:20.897533 containerd[1594]: time="2025-11-06T05:26:20.897438028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"03e89524a067c208c4b9153d79c2c96d4dbd9d16bb59aead53bc8ac3dc1dcf61\"" Nov 6 05:26:20.898083 containerd[1594]: time="2025-11-06T05:26:20.898007156Z" level=info msg="Container da623ce8100df269b5b8903b76b7f4f1269cdcfbaa070794680604398bdfc02a: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:26:20.898412 kubelet[2354]: E1106 05:26:20.898383 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:20.903916 containerd[1594]: time="2025-11-06T05:26:20.903842229Z" level=info msg="CreateContainer within sandbox \"03e89524a067c208c4b9153d79c2c96d4dbd9d16bb59aead53bc8ac3dc1dcf61\" for 
container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 05:26:20.909952 containerd[1594]: time="2025-11-06T05:26:20.909901973Z" level=info msg="Container 25218c305daaf8563581393a6a1f32baadc20bc52355952e51bfda4001835fa0: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:26:20.915118 containerd[1594]: time="2025-11-06T05:26:20.915065005Z" level=info msg="CreateContainer within sandbox \"de3a89b392ad20812c9ce9fc2a6d8b482dae203fc922f3a6c14ea6320c8891ef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"da623ce8100df269b5b8903b76b7f4f1269cdcfbaa070794680604398bdfc02a\"" Nov 6 05:26:20.916179 containerd[1594]: time="2025-11-06T05:26:20.916147195Z" level=info msg="StartContainer for \"da623ce8100df269b5b8903b76b7f4f1269cdcfbaa070794680604398bdfc02a\"" Nov 6 05:26:20.917350 containerd[1594]: time="2025-11-06T05:26:20.917316858Z" level=info msg="connecting to shim da623ce8100df269b5b8903b76b7f4f1269cdcfbaa070794680604398bdfc02a" address="unix:///run/containerd/s/9c989aa58227b395823802c8ec75bc49bd58aa212d75a77caada18acc5b11c63" protocol=ttrpc version=3 Nov 6 05:26:20.921886 containerd[1594]: time="2025-11-06T05:26:20.921851412Z" level=info msg="Container 81012da4794b205081fdfda7e636fcb188158b04cf319fe1e38d3f084c3523ca: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:26:20.925011 kubelet[2354]: E1106 05:26:20.924968 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 05:26:20.925970 containerd[1594]: time="2025-11-06T05:26:20.925930582Z" level=info msg="CreateContainer within sandbox \"f4365a61548c489d83fe8d2bd6b2330ae8d8a2a16aa811fb1791b2dcae71617e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"25218c305daaf8563581393a6a1f32baadc20bc52355952e51bfda4001835fa0\"" Nov 6 05:26:20.927827 containerd[1594]: time="2025-11-06T05:26:20.926642237Z" level=info msg="StartContainer for \"25218c305daaf8563581393a6a1f32baadc20bc52355952e51bfda4001835fa0\"" Nov 6 05:26:20.927827 containerd[1594]: time="2025-11-06T05:26:20.927731039Z" level=info msg="connecting to shim 25218c305daaf8563581393a6a1f32baadc20bc52355952e51bfda4001835fa0" address="unix:///run/containerd/s/0375047d699501759ac782cf15dca4cd8a58558be08dde8233ee069b265b1d95" protocol=ttrpc version=3 Nov 6 05:26:20.929287 containerd[1594]: time="2025-11-06T05:26:20.929262300Z" level=info msg="CreateContainer within sandbox \"03e89524a067c208c4b9153d79c2c96d4dbd9d16bb59aead53bc8ac3dc1dcf61\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"81012da4794b205081fdfda7e636fcb188158b04cf319fe1e38d3f084c3523ca\"" Nov 6 05:26:20.929815 containerd[1594]: time="2025-11-06T05:26:20.929779150Z" level=info msg="StartContainer for \"81012da4794b205081fdfda7e636fcb188158b04cf319fe1e38d3f084c3523ca\"" Nov 6 05:26:20.934596 containerd[1594]: time="2025-11-06T05:26:20.934575164Z" level=info msg="connecting to shim 81012da4794b205081fdfda7e636fcb188158b04cf319fe1e38d3f084c3523ca" address="unix:///run/containerd/s/d792e88172204ae34ff95019deb47e495add7493909043eafb10be303e970004" protocol=ttrpc version=3 Nov 6 05:26:20.938616 systemd[1]: Started cri-containerd-da623ce8100df269b5b8903b76b7f4f1269cdcfbaa070794680604398bdfc02a.scope - libcontainer container 
da623ce8100df269b5b8903b76b7f4f1269cdcfbaa070794680604398bdfc02a. Nov 6 05:26:20.955631 systemd[1]: Started cri-containerd-25218c305daaf8563581393a6a1f32baadc20bc52355952e51bfda4001835fa0.scope - libcontainer container 25218c305daaf8563581393a6a1f32baadc20bc52355952e51bfda4001835fa0. Nov 6 05:26:20.960348 systemd[1]: Started cri-containerd-81012da4794b205081fdfda7e636fcb188158b04cf319fe1e38d3f084c3523ca.scope - libcontainer container 81012da4794b205081fdfda7e636fcb188158b04cf319fe1e38d3f084c3523ca. Nov 6 05:26:20.978016 kubelet[2354]: E1106 05:26:20.977970 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 05:26:20.999641 kubelet[2354]: E1106 05:26:20.999601 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="1.6s" Nov 6 05:26:21.040824 containerd[1594]: time="2025-11-06T05:26:21.040749779Z" level=info msg="StartContainer for \"da623ce8100df269b5b8903b76b7f4f1269cdcfbaa070794680604398bdfc02a\" returns successfully" Nov 6 05:26:21.066285 containerd[1594]: time="2025-11-06T05:26:21.066233260Z" level=info msg="StartContainer for \"25218c305daaf8563581393a6a1f32baadc20bc52355952e51bfda4001835fa0\" returns successfully" Nov 6 05:26:21.072997 containerd[1594]: time="2025-11-06T05:26:21.072949685Z" level=info msg="StartContainer for \"81012da4794b205081fdfda7e636fcb188158b04cf319fe1e38d3f084c3523ca\" returns successfully" Nov 6 05:26:21.456347 kubelet[2354]: I1106 05:26:21.456273 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 05:26:21.625792 kubelet[2354]: E1106 05:26:21.625744 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:26:21.625924 kubelet[2354]: E1106 05:26:21.625875 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:21.628379 kubelet[2354]: E1106 05:26:21.628349 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:26:21.628466 kubelet[2354]: E1106 05:26:21.628439 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:21.630873 kubelet[2354]: E1106 05:26:21.630848 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 05:26:21.632535 kubelet[2354]: E1106 05:26:21.632505 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:22.142512 kubelet[2354]: I1106 05:26:22.142098 2354 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 05:26:22.193235 kubelet[2354]: I1106 05:26:22.193155 2354 kubelet.go:3309] "Creating a mirror pod for 
static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:22.211580 kubelet[2354]: E1106 05:26:22.211556 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:22.211824 kubelet[2354]: I1106 05:26:22.211667 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:22.214242 kubelet[2354]: E1106 05:26:22.214223 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:22.214370 kubelet[2354]: I1106 05:26:22.214358 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 05:26:22.217501 kubelet[2354]: E1106 05:26:22.217093 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 6 05:26:22.578346 kubelet[2354]: I1106 05:26:22.578225 2354 apiserver.go:52] "Watching apiserver" Nov 6 05:26:22.593952 kubelet[2354]: I1106 05:26:22.593899 2354 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 05:26:22.631581 kubelet[2354]: I1106 05:26:22.631521 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 05:26:22.631745 kubelet[2354]: I1106 05:26:22.631717 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:22.634234 kubelet[2354]: E1106 05:26:22.633855 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 6 05:26:22.634234 kubelet[2354]: E1106 05:26:22.633982 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:22.634234 kubelet[2354]: E1106 05:26:22.634013 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:22.634234 kubelet[2354]: E1106 05:26:22.634149 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:24.039722 systemd[1]: Reload requested from client PID 2640 ('systemctl') (unit session-7.scope)... Nov 6 05:26:24.039738 systemd[1]: Reloading... Nov 6 05:26:24.119540 zram_generator::config[2689]: No configuration found. Nov 6 05:26:24.348796 systemd[1]: Reloading finished in 308 ms. Nov 6 05:26:24.378368 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:26:24.407864 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 05:26:24.408223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:26:24.408283 systemd[1]: kubelet.service: Consumed 1.595s CPU time, 131.5M memory peak. 
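The recurring "Nameserver limits exceeded" warnings are benign: the host resolv.conf lists more nameservers than the kubelet will pass through to pods, so only the first three are applied (the "1.1.1.1 1.0.0.1 8.8.8.8" line in the messages above). A quick check along these lines shows which entries would be kept and which dropped; the three-server limit matches the applied line in the log, and the parsing here is deliberately simplified.

// dnscheck.go: sketch of why the kubelet logs "Nameserver limits exceeded".
// Only the first three nameservers from the host resolv.conf are applied to
// pods; the applied line in the log above is "1.1.1.1 1.0.0.1 8.8.8.8".
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // matches the three-entry applied line in the log

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("applied: %s\n", strings.Join(servers[:maxNameservers], " "))
		fmt.Printf("omitted: %s\n", strings.Join(servers[maxNameservers:], " "))
	} else {
		fmt.Printf("all %d nameservers fit within the limit\n", len(servers))
	}
}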
Nov 6 05:26:24.410226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:26:24.664757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:26:24.675938 (kubelet)[2728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 05:26:24.735218 kubelet[2728]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 05:26:24.735218 kubelet[2728]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 05:26:24.735218 kubelet[2728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 05:26:24.735876 kubelet[2728]: I1106 05:26:24.735260 2728 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 05:26:24.744835 kubelet[2728]: I1106 05:26:24.744788 2728 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 05:26:24.744835 kubelet[2728]: I1106 05:26:24.744821 2728 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 05:26:24.745234 kubelet[2728]: I1106 05:26:24.745138 2728 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 05:26:24.746879 kubelet[2728]: I1106 05:26:24.746584 2728 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 05:26:24.749334 kubelet[2728]: I1106 05:26:24.749280 2728 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 05:26:24.753172 kubelet[2728]: I1106 05:26:24.753144 2728 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 05:26:24.761144 kubelet[2728]: I1106 05:26:24.759645 2728 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 05:26:24.761144 kubelet[2728]: I1106 05:26:24.759966 2728 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 05:26:24.761144 kubelet[2728]: I1106 05:26:24.759999 2728 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 05:26:24.761144 kubelet[2728]: I1106 05:26:24.760342 2728 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 05:26:24.761452 kubelet[2728]: I1106 05:26:24.760352 2728 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 05:26:24.761452 kubelet[2728]: I1106 05:26:24.760406 2728 state_mem.go:36] "Initialized new in-memory state store" Nov 6 05:26:24.761452 kubelet[2728]: I1106 05:26:24.760683 2728 kubelet.go:480] "Attempting to sync node with API server" Nov 6 05:26:24.761452 kubelet[2728]: I1106 05:26:24.760704 2728 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 05:26:24.761452 kubelet[2728]: I1106 05:26:24.760741 2728 kubelet.go:386] "Adding apiserver pod source" Nov 6 05:26:24.761452 kubelet[2728]: I1106 05:26:24.760768 2728 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 05:26:24.762909 kubelet[2728]: I1106 05:26:24.762877 2728 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 6 05:26:24.763414 kubelet[2728]: I1106 05:26:24.763380 2728 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 05:26:24.767001 kubelet[2728]: I1106 05:26:24.766972 2728 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 05:26:24.767060 kubelet[2728]: I1106 05:26:24.767033 2728 server.go:1289] "Started kubelet" Nov 6 05:26:24.767420 kubelet[2728]: I1106 05:26:24.767357 2728 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 05:26:24.767907 kubelet[2728]: I1106 05:26:24.767835 2728 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 05:26:24.768323 kubelet[2728]: I1106 05:26:24.768306 2728 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 05:26:24.770595 kubelet[2728]: I1106 05:26:24.768860 2728 server.go:317] "Adding debug handlers to kubelet server" Nov 6 05:26:24.772293 kubelet[2728]: I1106 05:26:24.770041 2728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 05:26:24.772878 kubelet[2728]: I1106 05:26:24.770185 2728 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 05:26:24.772974 kubelet[2728]: I1106 05:26:24.772954 2728 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 05:26:24.773378 kubelet[2728]: I1106 05:26:24.773347 2728 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 05:26:24.773595 kubelet[2728]: I1106 05:26:24.773573 2728 reconciler.go:26] "Reconciler: start to sync state" Nov 6 05:26:24.774219 kubelet[2728]: I1106 05:26:24.774191 2728 factory.go:223] Registration of the systemd container factory successfully Nov 6 05:26:24.774515 kubelet[2728]: I1106 05:26:24.774296 2728 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 05:26:24.774713 kubelet[2728]: E1106 05:26:24.774685 2728 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 05:26:24.775656 kubelet[2728]: I1106 05:26:24.775630 2728 factory.go:223] Registration of the containerd container factory successfully Nov 6 05:26:24.790707 kubelet[2728]: I1106 05:26:24.790630 2728 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 05:26:24.792341 kubelet[2728]: I1106 05:26:24.792307 2728 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 05:26:24.792781 kubelet[2728]: I1106 05:26:24.792382 2728 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 05:26:24.792915 kubelet[2728]: I1106 05:26:24.792801 2728 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
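As in the first kubelet run, the systemd and containerd container factories register successfully while the crio factory fails because /var/run/crio/crio.sock does not exist; on a containerd-only node this is expected noise rather than something to fix. A small sketch that reports which runtime sockets are present on the host; the cri-o path comes from the log, while the containerd path is the conventional default and an assumption here.

// crisockets.go: check which container-runtime sockets exist on the node.
// Grounded in the log line "dial unix /var/run/crio/crio.sock ... no such file
// or directory"; the containerd path is the conventional default (assumption).
package main

import (
	"fmt"
	"os"
)

func main() {
	sockets := map[string]string{
		"cri-o":      "/var/run/crio/crio.sock",         // path from the log
		"containerd": "/run/containerd/containerd.sock", // conventional default, assumed
	}
	for name, path := range sockets {
		if _, err := os.Stat(path); err == nil {
			fmt.Printf("%-10s socket present: %s\n", name, path)
		} else {
			fmt.Printf("%-10s socket absent:  %s (%v)\n", name, path, err)
		}
	}
}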
Nov 6 05:26:24.792915 kubelet[2728]: I1106 05:26:24.792815 2728 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 05:26:24.792915 kubelet[2728]: E1106 05:26:24.792867 2728 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 05:26:24.818310 kubelet[2728]: I1106 05:26:24.818270 2728 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 05:26:24.818310 kubelet[2728]: I1106 05:26:24.818290 2728 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 05:26:24.818310 kubelet[2728]: I1106 05:26:24.818312 2728 state_mem.go:36] "Initialized new in-memory state store" Nov 6 05:26:24.818549 kubelet[2728]: I1106 05:26:24.818463 2728 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 05:26:24.818549 kubelet[2728]: I1106 05:26:24.818521 2728 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 05:26:24.818549 kubelet[2728]: I1106 05:26:24.818544 2728 policy_none.go:49] "None policy: Start" Nov 6 05:26:24.818612 kubelet[2728]: I1106 05:26:24.818560 2728 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 05:26:24.818612 kubelet[2728]: I1106 05:26:24.818572 2728 state_mem.go:35] "Initializing new in-memory state store" Nov 6 05:26:24.818687 kubelet[2728]: I1106 05:26:24.818675 2728 state_mem.go:75] "Updated machine memory state" Nov 6 05:26:24.823000 kubelet[2728]: E1106 05:26:24.822954 2728 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 05:26:24.823202 kubelet[2728]: I1106 05:26:24.823185 2728 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 05:26:24.823234 kubelet[2728]: I1106 05:26:24.823205 2728 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 05:26:24.823507 kubelet[2728]: I1106 05:26:24.823466 2728 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 05:26:24.825014 kubelet[2728]: E1106 05:26:24.824985 2728 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 05:26:24.894412 kubelet[2728]: I1106 05:26:24.894327 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:24.894829 kubelet[2728]: I1106 05:26:24.894771 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:24.894829 kubelet[2728]: I1106 05:26:24.894810 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 05:26:24.934408 kubelet[2728]: I1106 05:26:24.934172 2728 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 05:26:24.959926 kubelet[2728]: I1106 05:26:24.959897 2728 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 6 05:26:24.959991 kubelet[2728]: I1106 05:26:24.959979 2728 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 05:26:24.974604 kubelet[2728]: I1106 05:26:24.974566 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b6dfa5badab08ee8005491ead468a05-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b6dfa5badab08ee8005491ead468a05\") " pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:24.974604 kubelet[2728]: I1106 05:26:24.974601 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b6dfa5badab08ee8005491ead468a05-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b6dfa5badab08ee8005491ead468a05\") " pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:24.974717 kubelet[2728]: I1106 05:26:24.974621 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:24.974717 kubelet[2728]: I1106 05:26:24.974636 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:24.974717 kubelet[2728]: I1106 05:26:24.974652 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:24.974717 kubelet[2728]: I1106 05:26:24.974666 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:24.974814 kubelet[2728]: I1106 05:26:24.974724 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 6 05:26:24.974814 kubelet[2728]: I1106 05:26:24.974761 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b6dfa5badab08ee8005491ead468a05-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b6dfa5badab08ee8005491ead468a05\") " pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:24.974814 kubelet[2728]: I1106 05:26:24.974784 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 05:26:25.258851 kubelet[2728]: E1106 05:26:25.258563 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:25.258851 kubelet[2728]: E1106 05:26:25.258617 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:25.258851 kubelet[2728]: E1106 05:26:25.258642 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:25.761929 kubelet[2728]: I1106 05:26:25.761872 2728 apiserver.go:52] "Watching apiserver" Nov 6 05:26:25.774538 kubelet[2728]: I1106 05:26:25.774460 2728 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 05:26:25.806629 kubelet[2728]: E1106 05:26:25.806340 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:25.806629 kubelet[2728]: E1106 05:26:25.806430 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:25.806629 kubelet[2728]: I1106 05:26:25.806457 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:25.812204 kubelet[2728]: E1106 05:26:25.812176 2728 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 6 05:26:25.812460 kubelet[2728]: E1106 05:26:25.812413 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:25.829304 kubelet[2728]: I1106 05:26:25.829090 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8290638270000001 podStartE2EDuration="1.829063827s" podCreationTimestamp="2025-11-06 05:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:26:25.822780363 +0000 UTC m=+1.139451298" 
watchObservedRunningTime="2025-11-06 05:26:25.829063827 +0000 UTC m=+1.145734762" Nov 6 05:26:25.835395 kubelet[2728]: I1106 05:26:25.835324 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.835305853 podStartE2EDuration="1.835305853s" podCreationTimestamp="2025-11-06 05:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:26:25.829026527 +0000 UTC m=+1.145697462" watchObservedRunningTime="2025-11-06 05:26:25.835305853 +0000 UTC m=+1.151976788" Nov 6 05:26:25.842544 kubelet[2728]: I1106 05:26:25.842456 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.842438239 podStartE2EDuration="1.842438239s" podCreationTimestamp="2025-11-06 05:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:26:25.83558666 +0000 UTC m=+1.152257595" watchObservedRunningTime="2025-11-06 05:26:25.842438239 +0000 UTC m=+1.159109174" Nov 6 05:26:26.807947 kubelet[2728]: E1106 05:26:26.807903 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:26.808404 kubelet[2728]: E1106 05:26:26.808069 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:27.810335 kubelet[2728]: E1106 05:26:27.810284 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:27.988756 kubelet[2728]: E1106 05:26:27.988711 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:29.476027 kubelet[2728]: I1106 05:26:29.475989 2728 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 05:26:29.476561 containerd[1594]: time="2025-11-06T05:26:29.476505215Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 05:26:29.476817 kubelet[2728]: I1106 05:26:29.476697 2728 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 05:26:30.354494 systemd[1]: Created slice kubepods-besteffort-poda4169ac0_7772_4a8d_b286_f694e7304587.slice - libcontainer container kubepods-besteffort-poda4169ac0_7772_4a8d_b286_f694e7304587.slice. 
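The pod_startup_latency_tracker entries above can be checked by hand: for kube-scheduler-localhost, the logged podStartSLOduration (1.829063827s) matches watchObservedRunningTime (05:26:25.829063827) minus podCreationTimestamp (05:26:24), and the zero-valued pulling timestamps reflect that no image pull was needed for the preloaded static-pod images. A short sketch that redoes that arithmetic with the timestamps copied from the log.

// startuplatency.go: redo the arithmetic behind the podStartSLOduration
// entries above, using the timestamps copied from the kube-scheduler entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	// time.Time.String() layout; Parse also accepts the fractional seconds.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2025-11-06 05:26:24 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-11-06 05:26:25.829063827 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints ~1.829063827s, matching the logged podStartSLOduration.
	fmt.Printf("startup duration: %v\n", running.Sub(created))
}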
Nov 6 05:26:30.406000 kubelet[2728]: I1106 05:26:30.405936 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a4169ac0-7772-4a8d-b286-f694e7304587-kube-proxy\") pod \"kube-proxy-nwhbc\" (UID: \"a4169ac0-7772-4a8d-b286-f694e7304587\") " pod="kube-system/kube-proxy-nwhbc" Nov 6 05:26:30.406000 kubelet[2728]: I1106 05:26:30.405982 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4169ac0-7772-4a8d-b286-f694e7304587-xtables-lock\") pod \"kube-proxy-nwhbc\" (UID: \"a4169ac0-7772-4a8d-b286-f694e7304587\") " pod="kube-system/kube-proxy-nwhbc" Nov 6 05:26:30.406000 kubelet[2728]: I1106 05:26:30.406006 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4169ac0-7772-4a8d-b286-f694e7304587-lib-modules\") pod \"kube-proxy-nwhbc\" (UID: \"a4169ac0-7772-4a8d-b286-f694e7304587\") " pod="kube-system/kube-proxy-nwhbc" Nov 6 05:26:30.406263 kubelet[2728]: I1106 05:26:30.406026 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwqr7\" (UniqueName: \"kubernetes.io/projected/a4169ac0-7772-4a8d-b286-f694e7304587-kube-api-access-lwqr7\") pod \"kube-proxy-nwhbc\" (UID: \"a4169ac0-7772-4a8d-b286-f694e7304587\") " pod="kube-system/kube-proxy-nwhbc" Nov 6 05:26:30.663000 kubelet[2728]: E1106 05:26:30.662830 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:30.666627 containerd[1594]: time="2025-11-06T05:26:30.666564714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nwhbc,Uid:a4169ac0-7772-4a8d-b286-f694e7304587,Namespace:kube-system,Attempt:0,}" Nov 6 05:26:30.678972 systemd[1]: Created slice kubepods-besteffort-podb1c62403_013a_4326_8d1c_03df43cb1a32.slice - libcontainer container kubepods-besteffort-podb1c62403_013a_4326_8d1c_03df43cb1a32.slice. 
Nov 6 05:26:30.708630 kubelet[2728]: I1106 05:26:30.708593 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b1c62403-013a-4326-8d1c-03df43cb1a32-var-lib-calico\") pod \"tigera-operator-7dcd859c48-fcgnx\" (UID: \"b1c62403-013a-4326-8d1c-03df43cb1a32\") " pod="tigera-operator/tigera-operator-7dcd859c48-fcgnx" Nov 6 05:26:30.708630 kubelet[2728]: I1106 05:26:30.708627 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc2fp\" (UniqueName: \"kubernetes.io/projected/b1c62403-013a-4326-8d1c-03df43cb1a32-kube-api-access-wc2fp\") pod \"tigera-operator-7dcd859c48-fcgnx\" (UID: \"b1c62403-013a-4326-8d1c-03df43cb1a32\") " pod="tigera-operator/tigera-operator-7dcd859c48-fcgnx" Nov 6 05:26:30.721185 containerd[1594]: time="2025-11-06T05:26:30.721109062Z" level=info msg="connecting to shim 8ebfd3a6457ec43b0aa3d4e2b598e1d811db9f95c508969e63d7a2a9428df563" address="unix:///run/containerd/s/9b3bc608fad27694c51c622539d531a54116e7d2ca9d73590895900337689c07" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:26:30.784696 systemd[1]: Started cri-containerd-8ebfd3a6457ec43b0aa3d4e2b598e1d811db9f95c508969e63d7a2a9428df563.scope - libcontainer container 8ebfd3a6457ec43b0aa3d4e2b598e1d811db9f95c508969e63d7a2a9428df563. Nov 6 05:26:30.819095 containerd[1594]: time="2025-11-06T05:26:30.819056627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nwhbc,Uid:a4169ac0-7772-4a8d-b286-f694e7304587,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ebfd3a6457ec43b0aa3d4e2b598e1d811db9f95c508969e63d7a2a9428df563\"" Nov 6 05:26:30.820210 kubelet[2728]: E1106 05:26:30.820187 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:30.826146 containerd[1594]: time="2025-11-06T05:26:30.826083397Z" level=info msg="CreateContainer within sandbox \"8ebfd3a6457ec43b0aa3d4e2b598e1d811db9f95c508969e63d7a2a9428df563\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 05:26:30.839463 containerd[1594]: time="2025-11-06T05:26:30.839414976Z" level=info msg="Container e1f652e7311acdccf6b5ddb030203d74d47bcb22d3013671f65762e4010f836c: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:26:30.847929 containerd[1594]: time="2025-11-06T05:26:30.847890067Z" level=info msg="CreateContainer within sandbox \"8ebfd3a6457ec43b0aa3d4e2b598e1d811db9f95c508969e63d7a2a9428df563\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e1f652e7311acdccf6b5ddb030203d74d47bcb22d3013671f65762e4010f836c\"" Nov 6 05:26:30.848586 containerd[1594]: time="2025-11-06T05:26:30.848549618Z" level=info msg="StartContainer for \"e1f652e7311acdccf6b5ddb030203d74d47bcb22d3013671f65762e4010f836c\"" Nov 6 05:26:30.850106 containerd[1594]: time="2025-11-06T05:26:30.850072030Z" level=info msg="connecting to shim e1f652e7311acdccf6b5ddb030203d74d47bcb22d3013671f65762e4010f836c" address="unix:///run/containerd/s/9b3bc608fad27694c51c622539d531a54116e7d2ca9d73590895900337689c07" protocol=ttrpc version=3 Nov 6 05:26:30.870616 systemd[1]: Started cri-containerd-e1f652e7311acdccf6b5ddb030203d74d47bcb22d3013671f65762e4010f836c.scope - libcontainer container e1f652e7311acdccf6b5ddb030203d74d47bcb22d3013671f65762e4010f836c. 
Nov 6 05:26:30.915924 containerd[1594]: time="2025-11-06T05:26:30.915819507Z" level=info msg="StartContainer for \"e1f652e7311acdccf6b5ddb030203d74d47bcb22d3013671f65762e4010f836c\" returns successfully" Nov 6 05:26:30.983052 containerd[1594]: time="2025-11-06T05:26:30.983007838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fcgnx,Uid:b1c62403-013a-4326-8d1c-03df43cb1a32,Namespace:tigera-operator,Attempt:0,}" Nov 6 05:26:31.005418 containerd[1594]: time="2025-11-06T05:26:31.005351436Z" level=info msg="connecting to shim 2dd3b95cb36ee733d9bcd5d83f49d8135f8719d32ac20a5816c21939ded61543" address="unix:///run/containerd/s/03d42ce865be7b5a6ee729c77043aec0d8075db88a749641953dddd49b7958cb" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:26:31.036643 systemd[1]: Started cri-containerd-2dd3b95cb36ee733d9bcd5d83f49d8135f8719d32ac20a5816c21939ded61543.scope - libcontainer container 2dd3b95cb36ee733d9bcd5d83f49d8135f8719d32ac20a5816c21939ded61543. Nov 6 05:26:31.084748 containerd[1594]: time="2025-11-06T05:26:31.084692217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fcgnx,Uid:b1c62403-013a-4326-8d1c-03df43cb1a32,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2dd3b95cb36ee733d9bcd5d83f49d8135f8719d32ac20a5816c21939ded61543\"" Nov 6 05:26:31.087572 containerd[1594]: time="2025-11-06T05:26:31.087507374Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 6 05:26:31.519031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627340020.mount: Deactivated successfully. Nov 6 05:26:31.819179 kubelet[2728]: E1106 05:26:31.818963 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:31.827498 kubelet[2728]: I1106 05:26:31.827401 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nwhbc" podStartSLOduration=1.8273798079999999 podStartE2EDuration="1.827379808s" podCreationTimestamp="2025-11-06 05:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:26:31.82671563 +0000 UTC m=+7.143386565" watchObservedRunningTime="2025-11-06 05:26:31.827379808 +0000 UTC m=+7.144050733" Nov 6 05:26:32.257940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424810645.mount: Deactivated successfully. 
Nov 6 05:26:33.008456 containerd[1594]: time="2025-11-06T05:26:33.008392679Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:33.009319 containerd[1594]: time="2025-11-06T05:26:33.009256405Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Nov 6 05:26:33.010623 containerd[1594]: time="2025-11-06T05:26:33.010589626Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:33.012792 containerd[1594]: time="2025-11-06T05:26:33.012745877Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:33.013336 containerd[1594]: time="2025-11-06T05:26:33.013303339Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.925662661s" Nov 6 05:26:33.013336 containerd[1594]: time="2025-11-06T05:26:33.013333317Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 6 05:26:33.017928 containerd[1594]: time="2025-11-06T05:26:33.017893539Z" level=info msg="CreateContainer within sandbox \"2dd3b95cb36ee733d9bcd5d83f49d8135f8719d32ac20a5816c21939ded61543\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 6 05:26:33.026612 containerd[1594]: time="2025-11-06T05:26:33.026556763Z" level=info msg="Container 936365fa75fd6090b21a4bae317c7ff9b859d3b51fcaa35859abf0d514da0e8f: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:26:33.030345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324332914.mount: Deactivated successfully. Nov 6 05:26:33.033792 containerd[1594]: time="2025-11-06T05:26:33.033753551Z" level=info msg="CreateContainer within sandbox \"2dd3b95cb36ee733d9bcd5d83f49d8135f8719d32ac20a5816c21939ded61543\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"936365fa75fd6090b21a4bae317c7ff9b859d3b51fcaa35859abf0d514da0e8f\"" Nov 6 05:26:33.034272 containerd[1594]: time="2025-11-06T05:26:33.034240148Z" level=info msg="StartContainer for \"936365fa75fd6090b21a4bae317c7ff9b859d3b51fcaa35859abf0d514da0e8f\"" Nov 6 05:26:33.035151 containerd[1594]: time="2025-11-06T05:26:33.035123933Z" level=info msg="connecting to shim 936365fa75fd6090b21a4bae317c7ff9b859d3b51fcaa35859abf0d514da0e8f" address="unix:///run/containerd/s/03d42ce865be7b5a6ee729c77043aec0d8075db88a749641953dddd49b7958cb" protocol=ttrpc version=3 Nov 6 05:26:33.059614 systemd[1]: Started cri-containerd-936365fa75fd6090b21a4bae317c7ff9b859d3b51fcaa35859abf0d514da0e8f.scope - libcontainer container 936365fa75fd6090b21a4bae317c7ff9b859d3b51fcaa35859abf0d514da0e8f. 
Nov 6 05:26:33.093346 containerd[1594]: time="2025-11-06T05:26:33.093270147Z" level=info msg="StartContainer for \"936365fa75fd6090b21a4bae317c7ff9b859d3b51fcaa35859abf0d514da0e8f\" returns successfully" Nov 6 05:26:35.068043 systemd[1]: cri-containerd-936365fa75fd6090b21a4bae317c7ff9b859d3b51fcaa35859abf0d514da0e8f.scope: Deactivated successfully. Nov 6 05:26:35.071550 containerd[1594]: time="2025-11-06T05:26:35.071097091Z" level=info msg="received exit event container_id:\"936365fa75fd6090b21a4bae317c7ff9b859d3b51fcaa35859abf0d514da0e8f\" id:\"936365fa75fd6090b21a4bae317c7ff9b859d3b51fcaa35859abf0d514da0e8f\" pid:3065 exit_status:1 exited_at:{seconds:1762406795 nanos:69761831}" Nov 6 05:26:35.107289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-936365fa75fd6090b21a4bae317c7ff9b859d3b51fcaa35859abf0d514da0e8f-rootfs.mount: Deactivated successfully. Nov 6 05:26:35.435197 kubelet[2728]: E1106 05:26:35.435119 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:35.452363 kubelet[2728]: I1106 05:26:35.452261 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-fcgnx" podStartSLOduration=3.524227484 podStartE2EDuration="5.452237496s" podCreationTimestamp="2025-11-06 05:26:30 +0000 UTC" firstStartedPulling="2025-11-06 05:26:31.086113541 +0000 UTC m=+6.402784476" lastFinishedPulling="2025-11-06 05:26:33.014123553 +0000 UTC m=+8.330794488" observedRunningTime="2025-11-06 05:26:33.832620378 +0000 UTC m=+9.149291323" watchObservedRunningTime="2025-11-06 05:26:35.452237496 +0000 UTC m=+10.768908431" Nov 6 05:26:35.832105 kubelet[2728]: I1106 05:26:35.831991 2728 scope.go:117] "RemoveContainer" containerID="936365fa75fd6090b21a4bae317c7ff9b859d3b51fcaa35859abf0d514da0e8f" Nov 6 05:26:35.834452 containerd[1594]: time="2025-11-06T05:26:35.834299083Z" level=info msg="CreateContainer within sandbox \"2dd3b95cb36ee733d9bcd5d83f49d8135f8719d32ac20a5816c21939ded61543\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 6 05:26:35.847104 containerd[1594]: time="2025-11-06T05:26:35.846915124Z" level=info msg="Container a0cb78fc459dbbb45e81d596200fe80eb920a4e06d2f0b4348d8ab584996d374: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:26:35.853284 containerd[1594]: time="2025-11-06T05:26:35.853235587Z" level=info msg="CreateContainer within sandbox \"2dd3b95cb36ee733d9bcd5d83f49d8135f8719d32ac20a5816c21939ded61543\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a0cb78fc459dbbb45e81d596200fe80eb920a4e06d2f0b4348d8ab584996d374\"" Nov 6 05:26:35.853720 containerd[1594]: time="2025-11-06T05:26:35.853693690Z" level=info msg="StartContainer for \"a0cb78fc459dbbb45e81d596200fe80eb920a4e06d2f0b4348d8ab584996d374\"" Nov 6 05:26:35.854984 containerd[1594]: time="2025-11-06T05:26:35.854735902Z" level=info msg="connecting to shim a0cb78fc459dbbb45e81d596200fe80eb920a4e06d2f0b4348d8ab584996d374" address="unix:///run/containerd/s/03d42ce865be7b5a6ee729c77043aec0d8075db88a749641953dddd49b7958cb" protocol=ttrpc version=3 Nov 6 05:26:35.881652 systemd[1]: Started cri-containerd-a0cb78fc459dbbb45e81d596200fe80eb920a4e06d2f0b4348d8ab584996d374.scope - libcontainer container a0cb78fc459dbbb45e81d596200fe80eb920a4e06d2f0b4348d8ab584996d374. 
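The pod_startup_latency_tracker figures above are consistent with the SLO duration being the end-to-end startup time with the image-pull window subtracted: for tigera-operator-7dcd859c48-fcgnx, the 05:26:31.086 to 05:26:33.014 pull window is about 1.928010012s, and 5.452237496s minus 1.928010012s is exactly the reported 3.524227484s, while kube-proxy-nwhbc (zero pull timestamps, image already present) reports an SLO duration equal to its E2E duration. A short Go check of that arithmetic, with the values copied from the log (a reconstruction of the numbers, not kubelet code):

    // Arithmetic check of the pod_startup_latency_tracker values logged for
    // tigera-operator-7dcd859c48-fcgnx (timestamps copied from the log above).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        firstPull := parse("2025-11-06 05:26:31.086113541 +0000 UTC")
        lastPull := parse("2025-11-06 05:26:33.014123553 +0000 UTC")
        e2e := 5452237496 * time.Nanosecond // podStartE2EDuration

        pull := lastPull.Sub(firstPull)
        fmt.Println("image pull window:", pull)     // 1.928010012s
        fmt.Println("SLO duration:", e2e-pull)      // 3.524227484s, as logged
    }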
Nov 6 05:26:35.916054 containerd[1594]: time="2025-11-06T05:26:35.916008788Z" level=info msg="StartContainer for \"a0cb78fc459dbbb45e81d596200fe80eb920a4e06d2f0b4348d8ab584996d374\" returns successfully" Nov 6 05:26:36.512807 kubelet[2728]: E1106 05:26:36.512750 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:36.835224 kubelet[2728]: E1106 05:26:36.835021 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:37.992910 kubelet[2728]: E1106 05:26:37.992857 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:38.487270 sudo[1791]: pam_unix(sudo:session): session closed for user root Nov 6 05:26:38.488984 sshd[1790]: Connection closed by 10.0.0.1 port 54200 Nov 6 05:26:38.489596 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Nov 6 05:26:38.493704 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:54200.service: Deactivated successfully. Nov 6 05:26:38.496138 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 05:26:38.496385 systemd[1]: session-7.scope: Consumed 7.342s CPU time, 214.3M memory peak. Nov 6 05:26:38.498484 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. Nov 6 05:26:38.500333 systemd-logind[1574]: Removed session 7. Nov 6 05:26:39.532500 update_engine[1577]: I20251106 05:26:39.531563 1577 update_attempter.cc:509] Updating boot flags... Nov 6 05:26:43.082571 systemd[1]: Created slice kubepods-besteffort-pod4b7ed648_2b67_4904_ab51_d7e79c70c41d.slice - libcontainer container kubepods-besteffort-pod4b7ed648_2b67_4904_ab51_d7e79c70c41d.slice. Nov 6 05:26:43.090294 kubelet[2728]: I1106 05:26:43.089258 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b7ed648-2b67-4904-ab51-d7e79c70c41d-tigera-ca-bundle\") pod \"calico-typha-7d7b8755f6-tpkm4\" (UID: \"4b7ed648-2b67-4904-ab51-d7e79c70c41d\") " pod="calico-system/calico-typha-7d7b8755f6-tpkm4" Nov 6 05:26:43.090975 kubelet[2728]: I1106 05:26:43.090906 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsxmr\" (UniqueName: \"kubernetes.io/projected/4b7ed648-2b67-4904-ab51-d7e79c70c41d-kube-api-access-tsxmr\") pod \"calico-typha-7d7b8755f6-tpkm4\" (UID: \"4b7ed648-2b67-4904-ab51-d7e79c70c41d\") " pod="calico-system/calico-typha-7d7b8755f6-tpkm4" Nov 6 05:26:43.090975 kubelet[2728]: I1106 05:26:43.090934 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4b7ed648-2b67-4904-ab51-d7e79c70c41d-typha-certs\") pod \"calico-typha-7d7b8755f6-tpkm4\" (UID: \"4b7ed648-2b67-4904-ab51-d7e79c70c41d\") " pod="calico-system/calico-typha-7d7b8755f6-tpkm4" Nov 6 05:26:43.271530 systemd[1]: Created slice kubepods-besteffort-poddf3e5ef6_515a_46cf_bf50_a84ab020a865.slice - libcontainer container kubepods-besteffort-poddf3e5ef6_515a_46cf_bf50_a84ab020a865.slice. 
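The "Created slice" names above follow directly from the pod UIDs: the kubelet's systemd cgroup driver prefixes the QoS class (kubepods-besteffort-pod) and escapes the dashes in the UID to underscores, ending in .slice. A tiny sketch that reproduces the names seen in this log (pattern inferred from the log lines themselves, not taken from kubelet source):

    // Reconstructs the systemd slice names seen in this log from pod UIDs.
    // Pattern inferred from the "Created slice" entries above, not from
    // kubelet source.
    package main

    import (
        "fmt"
        "strings"
    )

    func besteffortSlice(podUID string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(besteffortSlice("4b7ed648-2b67-4904-ab51-d7e79c70c41d"))
        // kubepods-besteffort-pod4b7ed648_2b67_4904_ab51_d7e79c70c41d.slice
        fmt.Println(besteffortSlice("df3e5ef6-515a-46cf-bf50-a84ab020a865"))
        // kubepods-besteffort-poddf3e5ef6_515a_46cf_bf50_a84ab020a865.slice
    }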
Nov 6 05:26:43.292457 kubelet[2728]: I1106 05:26:43.292396 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/df3e5ef6-515a-46cf-bf50-a84ab020a865-cni-bin-dir\") pod \"calico-node-b7552\" (UID: \"df3e5ef6-515a-46cf-bf50-a84ab020a865\") " pod="calico-system/calico-node-b7552" Nov 6 05:26:43.292457 kubelet[2728]: I1106 05:26:43.292433 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df3e5ef6-515a-46cf-bf50-a84ab020a865-var-lib-calico\") pod \"calico-node-b7552\" (UID: \"df3e5ef6-515a-46cf-bf50-a84ab020a865\") " pod="calico-system/calico-node-b7552" Nov 6 05:26:43.292457 kubelet[2728]: I1106 05:26:43.292449 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df3e5ef6-515a-46cf-bf50-a84ab020a865-lib-modules\") pod \"calico-node-b7552\" (UID: \"df3e5ef6-515a-46cf-bf50-a84ab020a865\") " pod="calico-system/calico-node-b7552" Nov 6 05:26:43.292457 kubelet[2728]: I1106 05:26:43.292463 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df3e5ef6-515a-46cf-bf50-a84ab020a865-xtables-lock\") pod \"calico-node-b7552\" (UID: \"df3e5ef6-515a-46cf-bf50-a84ab020a865\") " pod="calico-system/calico-node-b7552" Nov 6 05:26:43.292717 kubelet[2728]: I1106 05:26:43.292492 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkmwp\" (UniqueName: \"kubernetes.io/projected/df3e5ef6-515a-46cf-bf50-a84ab020a865-kube-api-access-bkmwp\") pod \"calico-node-b7552\" (UID: \"df3e5ef6-515a-46cf-bf50-a84ab020a865\") " pod="calico-system/calico-node-b7552" Nov 6 05:26:43.292717 kubelet[2728]: I1106 05:26:43.292522 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/df3e5ef6-515a-46cf-bf50-a84ab020a865-cni-log-dir\") pod \"calico-node-b7552\" (UID: \"df3e5ef6-515a-46cf-bf50-a84ab020a865\") " pod="calico-system/calico-node-b7552" Nov 6 05:26:43.292717 kubelet[2728]: I1106 05:26:43.292536 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/df3e5ef6-515a-46cf-bf50-a84ab020a865-node-certs\") pod \"calico-node-b7552\" (UID: \"df3e5ef6-515a-46cf-bf50-a84ab020a865\") " pod="calico-system/calico-node-b7552" Nov 6 05:26:43.292717 kubelet[2728]: I1106 05:26:43.292555 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df3e5ef6-515a-46cf-bf50-a84ab020a865-tigera-ca-bundle\") pod \"calico-node-b7552\" (UID: \"df3e5ef6-515a-46cf-bf50-a84ab020a865\") " pod="calico-system/calico-node-b7552" Nov 6 05:26:43.292717 kubelet[2728]: I1106 05:26:43.292571 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/df3e5ef6-515a-46cf-bf50-a84ab020a865-policysync\") pod \"calico-node-b7552\" (UID: \"df3e5ef6-515a-46cf-bf50-a84ab020a865\") " pod="calico-system/calico-node-b7552" Nov 6 05:26:43.292847 kubelet[2728]: I1106 05:26:43.292585 2728 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/df3e5ef6-515a-46cf-bf50-a84ab020a865-cni-net-dir\") pod \"calico-node-b7552\" (UID: \"df3e5ef6-515a-46cf-bf50-a84ab020a865\") " pod="calico-system/calico-node-b7552" Nov 6 05:26:43.292847 kubelet[2728]: I1106 05:26:43.292600 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/df3e5ef6-515a-46cf-bf50-a84ab020a865-var-run-calico\") pod \"calico-node-b7552\" (UID: \"df3e5ef6-515a-46cf-bf50-a84ab020a865\") " pod="calico-system/calico-node-b7552" Nov 6 05:26:43.292847 kubelet[2728]: I1106 05:26:43.292615 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/df3e5ef6-515a-46cf-bf50-a84ab020a865-flexvol-driver-host\") pod \"calico-node-b7552\" (UID: \"df3e5ef6-515a-46cf-bf50-a84ab020a865\") " pod="calico-system/calico-node-b7552" Nov 6 05:26:43.386329 kubelet[2728]: E1106 05:26:43.386204 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:43.386916 containerd[1594]: time="2025-11-06T05:26:43.386862109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d7b8755f6-tpkm4,Uid:4b7ed648-2b67-4904-ab51-d7e79c70c41d,Namespace:calico-system,Attempt:0,}" Nov 6 05:26:43.399467 kubelet[2728]: E1106 05:26:43.397688 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.399467 kubelet[2728]: W1106 05:26:43.397713 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.399467 kubelet[2728]: E1106 05:26:43.398418 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.399467 kubelet[2728]: E1106 05:26:43.398757 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.399467 kubelet[2728]: W1106 05:26:43.398767 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.399467 kubelet[2728]: E1106 05:26:43.398777 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.403886 kubelet[2728]: E1106 05:26:43.403828 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.403886 kubelet[2728]: W1106 05:26:43.403849 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.403886 kubelet[2728]: E1106 05:26:43.403873 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.414497 containerd[1594]: time="2025-11-06T05:26:43.414442617Z" level=info msg="connecting to shim 98ff859ac71a665eb8a8fc8f913ba7caf570281337c2139ddc13575c3627f0fc" address="unix:///run/containerd/s/ec794359518303dcb22c83f901806d854dafc457e2121958453f0dd883eb597c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:26:43.448619 systemd[1]: Started cri-containerd-98ff859ac71a665eb8a8fc8f913ba7caf570281337c2139ddc13575c3627f0fc.scope - libcontainer container 98ff859ac71a665eb8a8fc8f913ba7caf570281337c2139ddc13575c3627f0fc. Nov 6 05:26:43.468610 kubelet[2728]: E1106 05:26:43.468543 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjsxf" podUID="a5b9a4e3-c326-4625-87d3-4243511ed604" Nov 6 05:26:43.483503 kubelet[2728]: E1106 05:26:43.482891 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.483503 kubelet[2728]: W1106 05:26:43.482913 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.483503 kubelet[2728]: E1106 05:26:43.482932 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.484760 kubelet[2728]: E1106 05:26:43.484728 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.484760 kubelet[2728]: W1106 05:26:43.484748 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.484760 kubelet[2728]: E1106 05:26:43.484759 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.485624 kubelet[2728]: E1106 05:26:43.485594 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.485624 kubelet[2728]: W1106 05:26:43.485612 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.485624 kubelet[2728]: E1106 05:26:43.485623 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.486022 kubelet[2728]: E1106 05:26:43.485997 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.486022 kubelet[2728]: W1106 05:26:43.486012 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.486022 kubelet[2728]: E1106 05:26:43.486022 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.489811 kubelet[2728]: E1106 05:26:43.489785 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.489811 kubelet[2728]: W1106 05:26:43.489802 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.489908 kubelet[2728]: E1106 05:26:43.489838 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.490058 kubelet[2728]: E1106 05:26:43.490034 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.490058 kubelet[2728]: W1106 05:26:43.490050 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.490113 kubelet[2728]: E1106 05:26:43.490061 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.490274 kubelet[2728]: E1106 05:26:43.490253 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.490274 kubelet[2728]: W1106 05:26:43.490267 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.490328 kubelet[2728]: E1106 05:26:43.490278 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.490486 kubelet[2728]: E1106 05:26:43.490451 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.490516 kubelet[2728]: W1106 05:26:43.490466 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.490516 kubelet[2728]: E1106 05:26:43.490506 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.490714 kubelet[2728]: E1106 05:26:43.490691 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.490714 kubelet[2728]: W1106 05:26:43.490707 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.490714 kubelet[2728]: E1106 05:26:43.490716 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.490922 kubelet[2728]: E1106 05:26:43.490899 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.490922 kubelet[2728]: W1106 05:26:43.490913 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.490922 kubelet[2728]: E1106 05:26:43.490922 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.491123 kubelet[2728]: E1106 05:26:43.491102 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.491123 kubelet[2728]: W1106 05:26:43.491116 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.491123 kubelet[2728]: E1106 05:26:43.491125 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.491331 kubelet[2728]: E1106 05:26:43.491312 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.491331 kubelet[2728]: W1106 05:26:43.491326 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.491383 kubelet[2728]: E1106 05:26:43.491335 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.491549 kubelet[2728]: E1106 05:26:43.491529 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.491549 kubelet[2728]: W1106 05:26:43.491544 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.491609 kubelet[2728]: E1106 05:26:43.491553 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.491772 kubelet[2728]: E1106 05:26:43.491750 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.491772 kubelet[2728]: W1106 05:26:43.491766 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.491826 kubelet[2728]: E1106 05:26:43.491775 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.491971 kubelet[2728]: E1106 05:26:43.491949 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.491971 kubelet[2728]: W1106 05:26:43.491964 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.491971 kubelet[2728]: E1106 05:26:43.491973 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.492172 kubelet[2728]: E1106 05:26:43.492143 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.492172 kubelet[2728]: W1106 05:26:43.492166 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.492232 kubelet[2728]: E1106 05:26:43.492176 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.492389 kubelet[2728]: E1106 05:26:43.492364 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.492389 kubelet[2728]: W1106 05:26:43.492378 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.492389 kubelet[2728]: E1106 05:26:43.492387 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.492607 kubelet[2728]: E1106 05:26:43.492581 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.492607 kubelet[2728]: W1106 05:26:43.492596 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.492607 kubelet[2728]: E1106 05:26:43.492606 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.493251 kubelet[2728]: E1106 05:26:43.493222 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.493251 kubelet[2728]: W1106 05:26:43.493240 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.493251 kubelet[2728]: E1106 05:26:43.493251 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.493644 kubelet[2728]: E1106 05:26:43.493619 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.493644 kubelet[2728]: W1106 05:26:43.493634 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.493731 kubelet[2728]: E1106 05:26:43.493666 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.494957 kubelet[2728]: E1106 05:26:43.494178 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.494957 kubelet[2728]: W1106 05:26:43.494950 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.495046 kubelet[2728]: E1106 05:26:43.494966 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.495124 kubelet[2728]: I1106 05:26:43.495094 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5b9a4e3-c326-4625-87d3-4243511ed604-socket-dir\") pod \"csi-node-driver-gjsxf\" (UID: \"a5b9a4e3-c326-4625-87d3-4243511ed604\") " pod="calico-system/csi-node-driver-gjsxf" Nov 6 05:26:43.495618 kubelet[2728]: E1106 05:26:43.495589 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.495618 kubelet[2728]: W1106 05:26:43.495607 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.495618 kubelet[2728]: E1106 05:26:43.495617 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.495884 kubelet[2728]: I1106 05:26:43.495740 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a5b9a4e3-c326-4625-87d3-4243511ed604-varrun\") pod \"csi-node-driver-gjsxf\" (UID: \"a5b9a4e3-c326-4625-87d3-4243511ed604\") " pod="calico-system/csi-node-driver-gjsxf" Nov 6 05:26:43.497926 kubelet[2728]: E1106 05:26:43.497890 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.497926 kubelet[2728]: W1106 05:26:43.497913 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.497926 kubelet[2728]: E1106 05:26:43.497923 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.498058 kubelet[2728]: I1106 05:26:43.497952 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a5b9a4e3-c326-4625-87d3-4243511ed604-kubelet-dir\") pod \"csi-node-driver-gjsxf\" (UID: \"a5b9a4e3-c326-4625-87d3-4243511ed604\") " pod="calico-system/csi-node-driver-gjsxf" Nov 6 05:26:43.499457 kubelet[2728]: E1106 05:26:43.499345 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.499457 kubelet[2728]: W1106 05:26:43.499393 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.499457 kubelet[2728]: E1106 05:26:43.499425 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.500656 kubelet[2728]: I1106 05:26:43.500611 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5b9a4e3-c326-4625-87d3-4243511ed604-registration-dir\") pod \"csi-node-driver-gjsxf\" (UID: \"a5b9a4e3-c326-4625-87d3-4243511ed604\") " pod="calico-system/csi-node-driver-gjsxf" Nov 6 05:26:43.501548 kubelet[2728]: E1106 05:26:43.501522 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.501548 kubelet[2728]: W1106 05:26:43.501544 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.501627 kubelet[2728]: E1106 05:26:43.501556 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.502498 kubelet[2728]: E1106 05:26:43.502439 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.502795 kubelet[2728]: W1106 05:26:43.502467 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.502795 kubelet[2728]: E1106 05:26:43.502772 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.503313 kubelet[2728]: E1106 05:26:43.503296 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.503393 kubelet[2728]: W1106 05:26:43.503372 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.503493 kubelet[2728]: E1106 05:26:43.503459 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.503810 kubelet[2728]: E1106 05:26:43.503797 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.503921 kubelet[2728]: W1106 05:26:43.503903 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.503986 kubelet[2728]: E1106 05:26:43.503975 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.504661 kubelet[2728]: I1106 05:26:43.504595 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2928\" (UniqueName: \"kubernetes.io/projected/a5b9a4e3-c326-4625-87d3-4243511ed604-kube-api-access-s2928\") pod \"csi-node-driver-gjsxf\" (UID: \"a5b9a4e3-c326-4625-87d3-4243511ed604\") " pod="calico-system/csi-node-driver-gjsxf" Nov 6 05:26:43.504784 kubelet[2728]: E1106 05:26:43.504769 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.504848 kubelet[2728]: W1106 05:26:43.504827 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.504919 kubelet[2728]: E1106 05:26:43.504907 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.505523 kubelet[2728]: E1106 05:26:43.505302 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.505523 kubelet[2728]: W1106 05:26:43.505319 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.505523 kubelet[2728]: E1106 05:26:43.505332 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.505640 kubelet[2728]: E1106 05:26:43.505632 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.505674 kubelet[2728]: W1106 05:26:43.505641 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.505674 kubelet[2728]: E1106 05:26:43.505651 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.505854 kubelet[2728]: E1106 05:26:43.505830 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.505854 kubelet[2728]: W1106 05:26:43.505845 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.505854 kubelet[2728]: E1106 05:26:43.505853 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.506075 kubelet[2728]: E1106 05:26:43.506050 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.506075 kubelet[2728]: W1106 05:26:43.506065 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.506075 kubelet[2728]: E1106 05:26:43.506073 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.506302 kubelet[2728]: E1106 05:26:43.506279 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.506302 kubelet[2728]: W1106 05:26:43.506294 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.506367 kubelet[2728]: E1106 05:26:43.506303 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.506571 kubelet[2728]: E1106 05:26:43.506546 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.506571 kubelet[2728]: W1106 05:26:43.506561 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.506571 kubelet[2728]: E1106 05:26:43.506569 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.518044 containerd[1594]: time="2025-11-06T05:26:43.517956003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d7b8755f6-tpkm4,Uid:4b7ed648-2b67-4904-ab51-d7e79c70c41d,Namespace:calico-system,Attempt:0,} returns sandbox id \"98ff859ac71a665eb8a8fc8f913ba7caf570281337c2139ddc13575c3627f0fc\"" Nov 6 05:26:43.519493 kubelet[2728]: E1106 05:26:43.519181 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:43.522012 containerd[1594]: time="2025-11-06T05:26:43.521686519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 6 05:26:43.576133 kubelet[2728]: E1106 05:26:43.576087 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:43.576830 containerd[1594]: time="2025-11-06T05:26:43.576773606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b7552,Uid:df3e5ef6-515a-46cf-bf50-a84ab020a865,Namespace:calico-system,Attempt:0,}" Nov 6 05:26:43.599128 containerd[1594]: time="2025-11-06T05:26:43.599081732Z" level=info msg="connecting to shim ce6b8e881802794152217784400a037495fbc509001a0ff3ca1c0ea8f3415117" address="unix:///run/containerd/s/f6c94aea75b1cf387bae8ae89cc7f114b130fbe656d7e918057ae2b300257fe0" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:26:43.606138 kubelet[2728]: E1106 05:26:43.606094 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.606138 kubelet[2728]: W1106 05:26:43.606117 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.606138 kubelet[2728]: E1106 05:26:43.606141 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.606406 kubelet[2728]: E1106 05:26:43.606391 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.606406 kubelet[2728]: W1106 05:26:43.606402 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.606469 kubelet[2728]: E1106 05:26:43.606414 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.606630 kubelet[2728]: E1106 05:26:43.606615 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.606630 kubelet[2728]: W1106 05:26:43.606625 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.606778 kubelet[2728]: E1106 05:26:43.606634 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.607037 kubelet[2728]: E1106 05:26:43.607008 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.607083 kubelet[2728]: W1106 05:26:43.607034 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.607083 kubelet[2728]: E1106 05:26:43.607059 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.607288 kubelet[2728]: E1106 05:26:43.607272 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.607288 kubelet[2728]: W1106 05:26:43.607283 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.607355 kubelet[2728]: E1106 05:26:43.607292 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.607529 kubelet[2728]: E1106 05:26:43.607512 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.607529 kubelet[2728]: W1106 05:26:43.607524 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.607589 kubelet[2728]: E1106 05:26:43.607534 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.607904 kubelet[2728]: E1106 05:26:43.607862 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.607904 kubelet[2728]: W1106 05:26:43.607898 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.608034 kubelet[2728]: E1106 05:26:43.607959 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.608279 kubelet[2728]: E1106 05:26:43.608253 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.608279 kubelet[2728]: W1106 05:26:43.608272 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.608334 kubelet[2728]: E1106 05:26:43.608300 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.608630 kubelet[2728]: E1106 05:26:43.608613 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.608630 kubelet[2728]: W1106 05:26:43.608626 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.608687 kubelet[2728]: E1106 05:26:43.608653 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.608897 kubelet[2728]: E1106 05:26:43.608880 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.608897 kubelet[2728]: W1106 05:26:43.608892 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.608954 kubelet[2728]: E1106 05:26:43.608900 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.609162 kubelet[2728]: E1106 05:26:43.609138 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.609162 kubelet[2728]: W1106 05:26:43.609158 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.609217 kubelet[2728]: E1106 05:26:43.609167 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.609509 kubelet[2728]: E1106 05:26:43.609458 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.609509 kubelet[2728]: W1106 05:26:43.609485 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.609509 kubelet[2728]: E1106 05:26:43.609495 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.609889 kubelet[2728]: E1106 05:26:43.609858 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.609889 kubelet[2728]: W1106 05:26:43.609872 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.609889 kubelet[2728]: E1106 05:26:43.609881 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.610153 kubelet[2728]: E1106 05:26:43.610125 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.610153 kubelet[2728]: W1106 05:26:43.610137 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.610206 kubelet[2728]: E1106 05:26:43.610157 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.610371 kubelet[2728]: E1106 05:26:43.610353 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.610371 kubelet[2728]: W1106 05:26:43.610365 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.610429 kubelet[2728]: E1106 05:26:43.610373 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.610644 kubelet[2728]: E1106 05:26:43.610627 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.610644 kubelet[2728]: W1106 05:26:43.610639 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.610696 kubelet[2728]: E1106 05:26:43.610648 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.610935 kubelet[2728]: E1106 05:26:43.610910 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.610935 kubelet[2728]: W1106 05:26:43.610923 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.610935 kubelet[2728]: E1106 05:26:43.610931 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.611249 kubelet[2728]: E1106 05:26:43.611228 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.611249 kubelet[2728]: W1106 05:26:43.611242 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.611249 kubelet[2728]: E1106 05:26:43.611252 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.611507 kubelet[2728]: E1106 05:26:43.611490 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.611507 kubelet[2728]: W1106 05:26:43.611504 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.611561 kubelet[2728]: E1106 05:26:43.611513 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.611921 kubelet[2728]: E1106 05:26:43.611895 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.611921 kubelet[2728]: W1106 05:26:43.611909 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.611921 kubelet[2728]: E1106 05:26:43.611918 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.612111 kubelet[2728]: E1106 05:26:43.612095 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.612111 kubelet[2728]: W1106 05:26:43.612108 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.612170 kubelet[2728]: E1106 05:26:43.612116 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.612378 kubelet[2728]: E1106 05:26:43.612359 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.612378 kubelet[2728]: W1106 05:26:43.612373 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.612428 kubelet[2728]: E1106 05:26:43.612382 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:43.612608 kubelet[2728]: E1106 05:26:43.612581 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.612608 kubelet[2728]: W1106 05:26:43.612591 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.612608 kubelet[2728]: E1106 05:26:43.612599 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.612831 kubelet[2728]: E1106 05:26:43.612813 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.612831 kubelet[2728]: W1106 05:26:43.612828 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.612884 kubelet[2728]: E1106 05:26:43.612836 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.613095 kubelet[2728]: E1106 05:26:43.613075 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.613095 kubelet[2728]: W1106 05:26:43.613087 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.613173 kubelet[2728]: E1106 05:26:43.613109 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.626742 kubelet[2728]: E1106 05:26:43.626715 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:43.626742 kubelet[2728]: W1106 05:26:43.626732 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:43.626836 kubelet[2728]: E1106 05:26:43.626749 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:43.632626 systemd[1]: Started cri-containerd-ce6b8e881802794152217784400a037495fbc509001a0ff3ca1c0ea8f3415117.scope - libcontainer container ce6b8e881802794152217784400a037495fbc509001a0ff3ca1c0ea8f3415117. 
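Editor's note: the repeated kubelet messages above come from FlexVolume plugin probing. kubelet scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the directory nodeagent~uds, tries to run its driver executable uds with the argument init, and, because the executable is missing, receives empty output that then fails JSON unmarshalling ("unexpected end of JSON input"). The probe is retried repeatedly, which is why the same three-line error recurs throughout the log. Below is a minimal sketch of a FlexVolume driver that would satisfy the init call; the file name uds and the install path are taken from the log, while everything else is an illustrative assumption, not the driver this node actually expects.

    #!/usr/bin/env python3
    # Minimal FlexVolume driver sketch (illustrative only).
    # kubelet invokes the driver executable as: <driver> init
    # and expects a JSON object with "status": "Success" on stdout.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Report success and declare that this driver does not
            # implement attach/detach, so kubelet will not call them.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Any operation this sketch does not implement.
        print(json.dumps({"status": "Not supported",
                          "message": "operation not implemented: " + op}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Installed as an executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds (the path kubelet logs above), a driver that answers init this way would stop the probing errors; the other common remedy is to remove the empty nodeagent~uds directory when the component that is supposed to ship that driver is not actually deployed on the node.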
Nov 6 05:26:43.662945 containerd[1594]: time="2025-11-06T05:26:43.662847174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b7552,Uid:df3e5ef6-515a-46cf-bf50-a84ab020a865,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce6b8e881802794152217784400a037495fbc509001a0ff3ca1c0ea8f3415117\"" Nov 6 05:26:43.663920 kubelet[2728]: E1106 05:26:43.663868 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:44.794103 kubelet[2728]: E1106 05:26:44.793993 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjsxf" podUID="a5b9a4e3-c326-4625-87d3-4243511ed604" Nov 6 05:26:44.916669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447394725.mount: Deactivated successfully. Nov 6 05:26:46.707268 containerd[1594]: time="2025-11-06T05:26:46.707199451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:46.708142 containerd[1594]: time="2025-11-06T05:26:46.708106564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Nov 6 05:26:46.709679 containerd[1594]: time="2025-11-06T05:26:46.709627186Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:46.711727 containerd[1594]: time="2025-11-06T05:26:46.711681727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:46.712177 containerd[1594]: time="2025-11-06T05:26:46.712142948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.190426964s" Nov 6 05:26:46.712220 containerd[1594]: time="2025-11-06T05:26:46.712177723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 6 05:26:46.713205 containerd[1594]: time="2025-11-06T05:26:46.713135322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 6 05:26:46.726855 containerd[1594]: time="2025-11-06T05:26:46.726807794Z" level=info msg="CreateContainer within sandbox \"98ff859ac71a665eb8a8fc8f913ba7caf570281337c2139ddc13575c3627f0fc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 6 05:26:46.736346 containerd[1594]: time="2025-11-06T05:26:46.735537693Z" level=info msg="Container 8fe28fb3c016358e81bb39440b23f47b252e1ab01c656ff14e10048964a8965c: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:26:46.744301 containerd[1594]: time="2025-11-06T05:26:46.744265537Z" level=info msg="CreateContainer within sandbox \"98ff859ac71a665eb8a8fc8f913ba7caf570281337c2139ddc13575c3627f0fc\" for 
&ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8fe28fb3c016358e81bb39440b23f47b252e1ab01c656ff14e10048964a8965c\"" Nov 6 05:26:46.746507 containerd[1594]: time="2025-11-06T05:26:46.744855281Z" level=info msg="StartContainer for \"8fe28fb3c016358e81bb39440b23f47b252e1ab01c656ff14e10048964a8965c\"" Nov 6 05:26:46.746507 containerd[1594]: time="2025-11-06T05:26:46.746049586Z" level=info msg="connecting to shim 8fe28fb3c016358e81bb39440b23f47b252e1ab01c656ff14e10048964a8965c" address="unix:///run/containerd/s/ec794359518303dcb22c83f901806d854dafc457e2121958453f0dd883eb597c" protocol=ttrpc version=3 Nov 6 05:26:46.768642 systemd[1]: Started cri-containerd-8fe28fb3c016358e81bb39440b23f47b252e1ab01c656ff14e10048964a8965c.scope - libcontainer container 8fe28fb3c016358e81bb39440b23f47b252e1ab01c656ff14e10048964a8965c. Nov 6 05:26:46.794286 kubelet[2728]: E1106 05:26:46.793883 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjsxf" podUID="a5b9a4e3-c326-4625-87d3-4243511ed604" Nov 6 05:26:46.825332 containerd[1594]: time="2025-11-06T05:26:46.825275216Z" level=info msg="StartContainer for \"8fe28fb3c016358e81bb39440b23f47b252e1ab01c656ff14e10048964a8965c\" returns successfully" Nov 6 05:26:46.860500 kubelet[2728]: E1106 05:26:46.860351 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:46.874681 kubelet[2728]: I1106 05:26:46.874563 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d7b8755f6-tpkm4" podStartSLOduration=0.683018438 podStartE2EDuration="3.874536794s" podCreationTimestamp="2025-11-06 05:26:43 +0000 UTC" firstStartedPulling="2025-11-06 05:26:43.521372495 +0000 UTC m=+18.838043430" lastFinishedPulling="2025-11-06 05:26:46.712890851 +0000 UTC m=+22.029561786" observedRunningTime="2025-11-06 05:26:46.874044254 +0000 UTC m=+22.190715189" watchObservedRunningTime="2025-11-06 05:26:46.874536794 +0000 UTC m=+22.191207729" Nov 6 05:26:46.914803 kubelet[2728]: E1106 05:26:46.914755 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.914803 kubelet[2728]: W1106 05:26:46.914788 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.914995 kubelet[2728]: E1106 05:26:46.914845 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:46.915132 kubelet[2728]: E1106 05:26:46.915109 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.915162 kubelet[2728]: W1106 05:26:46.915138 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.915162 kubelet[2728]: E1106 05:26:46.915150 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.915365 kubelet[2728]: E1106 05:26:46.915349 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.915365 kubelet[2728]: W1106 05:26:46.915361 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.915413 kubelet[2728]: E1106 05:26:46.915370 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.915723 kubelet[2728]: E1106 05:26:46.915705 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.915764 kubelet[2728]: W1106 05:26:46.915744 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.915764 kubelet[2728]: E1106 05:26:46.915757 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.915979 kubelet[2728]: E1106 05:26:46.915956 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.915979 kubelet[2728]: W1106 05:26:46.915968 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.916039 kubelet[2728]: E1106 05:26:46.915989 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.916241 kubelet[2728]: E1106 05:26:46.916224 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.916241 kubelet[2728]: W1106 05:26:46.916237 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.916301 kubelet[2728]: E1106 05:26:46.916247 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:46.916497 kubelet[2728]: E1106 05:26:46.916465 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.916497 kubelet[2728]: W1106 05:26:46.916494 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.916548 kubelet[2728]: E1106 05:26:46.916507 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.916740 kubelet[2728]: E1106 05:26:46.916724 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.916740 kubelet[2728]: W1106 05:26:46.916735 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.916818 kubelet[2728]: E1106 05:26:46.916745 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.917017 kubelet[2728]: E1106 05:26:46.916993 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.917017 kubelet[2728]: W1106 05:26:46.917013 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.917075 kubelet[2728]: E1106 05:26:46.917023 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.917258 kubelet[2728]: E1106 05:26:46.917241 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.917258 kubelet[2728]: W1106 05:26:46.917252 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.917334 kubelet[2728]: E1106 05:26:46.917261 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.917510 kubelet[2728]: E1106 05:26:46.917494 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.917545 kubelet[2728]: W1106 05:26:46.917516 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.917545 kubelet[2728]: E1106 05:26:46.917525 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:46.918347 kubelet[2728]: E1106 05:26:46.918326 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.918347 kubelet[2728]: W1106 05:26:46.918345 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.918437 kubelet[2728]: E1106 05:26:46.918356 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.918689 kubelet[2728]: E1106 05:26:46.918671 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.918689 kubelet[2728]: W1106 05:26:46.918684 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.918764 kubelet[2728]: E1106 05:26:46.918694 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.918941 kubelet[2728]: E1106 05:26:46.918924 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.918941 kubelet[2728]: W1106 05:26:46.918937 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.918941 kubelet[2728]: E1106 05:26:46.918945 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.919138 kubelet[2728]: E1106 05:26:46.919100 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.919138 kubelet[2728]: W1106 05:26:46.919119 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.919138 kubelet[2728]: E1106 05:26:46.919127 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.931566 kubelet[2728]: E1106 05:26:46.931529 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.931566 kubelet[2728]: W1106 05:26:46.931554 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.931752 kubelet[2728]: E1106 05:26:46.931578 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:46.932426 kubelet[2728]: E1106 05:26:46.932272 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.932426 kubelet[2728]: W1106 05:26:46.932307 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.932507 kubelet[2728]: E1106 05:26:46.932458 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.933812 kubelet[2728]: E1106 05:26:46.933789 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.933812 kubelet[2728]: W1106 05:26:46.933807 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.933907 kubelet[2728]: E1106 05:26:46.933820 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.934226 kubelet[2728]: E1106 05:26:46.934190 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.934226 kubelet[2728]: W1106 05:26:46.934204 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.934226 kubelet[2728]: E1106 05:26:46.934214 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.934638 kubelet[2728]: E1106 05:26:46.934618 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.934638 kubelet[2728]: W1106 05:26:46.934633 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.934724 kubelet[2728]: E1106 05:26:46.934644 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.935071 kubelet[2728]: E1106 05:26:46.935044 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.935071 kubelet[2728]: W1106 05:26:46.935058 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.935550 kubelet[2728]: E1106 05:26:46.935527 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:46.937292 kubelet[2728]: E1106 05:26:46.937271 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.937292 kubelet[2728]: W1106 05:26:46.937286 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.937292 kubelet[2728]: E1106 05:26:46.937298 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.937538 kubelet[2728]: E1106 05:26:46.937520 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.937538 kubelet[2728]: W1106 05:26:46.937533 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.937612 kubelet[2728]: E1106 05:26:46.937541 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.937858 kubelet[2728]: E1106 05:26:46.937839 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.937858 kubelet[2728]: W1106 05:26:46.937853 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.937931 kubelet[2728]: E1106 05:26:46.937864 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.938851 kubelet[2728]: E1106 05:26:46.938829 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.938851 kubelet[2728]: W1106 05:26:46.938845 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.938921 kubelet[2728]: E1106 05:26:46.938856 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.939158 kubelet[2728]: E1106 05:26:46.939130 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.939158 kubelet[2728]: W1106 05:26:46.939145 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.939158 kubelet[2728]: E1106 05:26:46.939155 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:46.939402 kubelet[2728]: E1106 05:26:46.939385 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.939402 kubelet[2728]: W1106 05:26:46.939397 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.939463 kubelet[2728]: E1106 05:26:46.939406 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.939662 kubelet[2728]: E1106 05:26:46.939645 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.939662 kubelet[2728]: W1106 05:26:46.939656 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.939735 kubelet[2728]: E1106 05:26:46.939664 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.939890 kubelet[2728]: E1106 05:26:46.939873 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.939890 kubelet[2728]: W1106 05:26:46.939884 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.939949 kubelet[2728]: E1106 05:26:46.939892 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.940168 kubelet[2728]: E1106 05:26:46.940150 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.940168 kubelet[2728]: W1106 05:26:46.940162 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.940231 kubelet[2728]: E1106 05:26:46.940172 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.941531 kubelet[2728]: E1106 05:26:46.940607 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.941531 kubelet[2728]: W1106 05:26:46.940622 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.941531 kubelet[2728]: E1106 05:26:46.940631 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:46.941531 kubelet[2728]: E1106 05:26:46.940967 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.941531 kubelet[2728]: W1106 05:26:46.940975 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.941531 kubelet[2728]: E1106 05:26:46.940984 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:46.941531 kubelet[2728]: E1106 05:26:46.941174 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:46.941531 kubelet[2728]: W1106 05:26:46.941182 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:46.941531 kubelet[2728]: E1106 05:26:46.941191 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.862006 kubelet[2728]: E1106 05:26:47.861964 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:47.923527 kubelet[2728]: E1106 05:26:47.923461 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.923527 kubelet[2728]: W1106 05:26:47.923497 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.923527 kubelet[2728]: E1106 05:26:47.923517 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.923767 kubelet[2728]: E1106 05:26:47.923690 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.923767 kubelet[2728]: W1106 05:26:47.923698 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.923767 kubelet[2728]: E1106 05:26:47.923706 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:47.923925 kubelet[2728]: E1106 05:26:47.923898 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.923925 kubelet[2728]: W1106 05:26:47.923911 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.923925 kubelet[2728]: E1106 05:26:47.923919 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.924117 kubelet[2728]: E1106 05:26:47.924084 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.924117 kubelet[2728]: W1106 05:26:47.924095 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.924117 kubelet[2728]: E1106 05:26:47.924115 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.924366 kubelet[2728]: E1106 05:26:47.924346 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.924366 kubelet[2728]: W1106 05:26:47.924357 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.924366 kubelet[2728]: E1106 05:26:47.924365 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.924573 kubelet[2728]: E1106 05:26:47.924543 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.924573 kubelet[2728]: W1106 05:26:47.924557 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.924573 kubelet[2728]: E1106 05:26:47.924567 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.924743 kubelet[2728]: E1106 05:26:47.924725 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.924743 kubelet[2728]: W1106 05:26:47.924736 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.924743 kubelet[2728]: E1106 05:26:47.924744 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:47.924929 kubelet[2728]: E1106 05:26:47.924903 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.924929 kubelet[2728]: W1106 05:26:47.924915 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.924929 kubelet[2728]: E1106 05:26:47.924923 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.925114 kubelet[2728]: E1106 05:26:47.925086 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.925114 kubelet[2728]: W1106 05:26:47.925097 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.925114 kubelet[2728]: E1106 05:26:47.925115 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.925287 kubelet[2728]: E1106 05:26:47.925270 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.925287 kubelet[2728]: W1106 05:26:47.925280 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.925287 kubelet[2728]: E1106 05:26:47.925288 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.925461 kubelet[2728]: E1106 05:26:47.925444 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.925461 kubelet[2728]: W1106 05:26:47.925454 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.925626 kubelet[2728]: E1106 05:26:47.925461 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.925652 kubelet[2728]: E1106 05:26:47.925639 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.925652 kubelet[2728]: W1106 05:26:47.925647 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.925694 kubelet[2728]: E1106 05:26:47.925655 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:47.925834 kubelet[2728]: E1106 05:26:47.925816 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.925834 kubelet[2728]: W1106 05:26:47.925826 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.925834 kubelet[2728]: E1106 05:26:47.925834 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.926018 kubelet[2728]: E1106 05:26:47.926000 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.926018 kubelet[2728]: W1106 05:26:47.926011 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.926067 kubelet[2728]: E1106 05:26:47.926019 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.926204 kubelet[2728]: E1106 05:26:47.926187 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.926204 kubelet[2728]: W1106 05:26:47.926197 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.926254 kubelet[2728]: E1106 05:26:47.926205 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.945713 kubelet[2728]: E1106 05:26:47.945687 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.945713 kubelet[2728]: W1106 05:26:47.945701 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.945713 kubelet[2728]: E1106 05:26:47.945711 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.945950 kubelet[2728]: E1106 05:26:47.945927 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.945950 kubelet[2728]: W1106 05:26:47.945943 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.946018 kubelet[2728]: E1106 05:26:47.945952 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:47.946283 kubelet[2728]: E1106 05:26:47.946238 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.946283 kubelet[2728]: W1106 05:26:47.946264 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.946283 kubelet[2728]: E1106 05:26:47.946293 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.946617 kubelet[2728]: E1106 05:26:47.946594 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.946617 kubelet[2728]: W1106 05:26:47.946608 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.946617 kubelet[2728]: E1106 05:26:47.946618 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.946892 kubelet[2728]: E1106 05:26:47.946860 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.946892 kubelet[2728]: W1106 05:26:47.946873 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.946892 kubelet[2728]: E1106 05:26:47.946882 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.947084 kubelet[2728]: E1106 05:26:47.947065 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.947084 kubelet[2728]: W1106 05:26:47.947075 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.947084 kubelet[2728]: E1106 05:26:47.947083 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.947345 kubelet[2728]: E1106 05:26:47.947324 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.947345 kubelet[2728]: W1106 05:26:47.947335 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.947345 kubelet[2728]: E1106 05:26:47.947344 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:47.947553 kubelet[2728]: E1106 05:26:47.947533 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.947553 kubelet[2728]: W1106 05:26:47.947544 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.947553 kubelet[2728]: E1106 05:26:47.947552 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.947769 kubelet[2728]: E1106 05:26:47.947749 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.947769 kubelet[2728]: W1106 05:26:47.947759 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.947769 kubelet[2728]: E1106 05:26:47.947768 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.947990 kubelet[2728]: E1106 05:26:47.947958 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.947990 kubelet[2728]: W1106 05:26:47.947969 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.947990 kubelet[2728]: E1106 05:26:47.947977 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.948193 kubelet[2728]: E1106 05:26:47.948173 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.948193 kubelet[2728]: W1106 05:26:47.948185 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.948193 kubelet[2728]: E1106 05:26:47.948195 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.948439 kubelet[2728]: E1106 05:26:47.948407 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.948439 kubelet[2728]: W1106 05:26:47.948419 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.948439 kubelet[2728]: E1106 05:26:47.948428 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:47.948765 kubelet[2728]: E1106 05:26:47.948742 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.948765 kubelet[2728]: W1106 05:26:47.948762 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.948814 kubelet[2728]: E1106 05:26:47.948783 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.948996 kubelet[2728]: E1106 05:26:47.948971 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.948996 kubelet[2728]: W1106 05:26:47.948982 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.948996 kubelet[2728]: E1106 05:26:47.948991 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.949192 kubelet[2728]: E1106 05:26:47.949177 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.949192 kubelet[2728]: W1106 05:26:47.949188 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.949242 kubelet[2728]: E1106 05:26:47.949197 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.949416 kubelet[2728]: E1106 05:26:47.949395 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.949416 kubelet[2728]: W1106 05:26:47.949407 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.949416 kubelet[2728]: E1106 05:26:47.949416 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:47.949648 kubelet[2728]: E1106 05:26:47.949635 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.949648 kubelet[2728]: W1106 05:26:47.949644 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.949702 kubelet[2728]: E1106 05:26:47.949652 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:47.950487 kubelet[2728]: E1106 05:26:47.950448 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:47.950487 kubelet[2728]: W1106 05:26:47.950460 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:47.950487 kubelet[2728]: E1106 05:26:47.950482 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:48.794164 kubelet[2728]: E1106 05:26:48.794102 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjsxf" podUID="a5b9a4e3-c326-4625-87d3-4243511ed604" Nov 6 05:26:48.863495 kubelet[2728]: E1106 05:26:48.863436 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:48.920770 containerd[1594]: time="2025-11-06T05:26:48.920709361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:48.921713 containerd[1594]: time="2025-11-06T05:26:48.921643584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Nov 6 05:26:48.923022 containerd[1594]: time="2025-11-06T05:26:48.922988811Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:48.925031 containerd[1594]: time="2025-11-06T05:26:48.924975611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:48.925506 containerd[1594]: time="2025-11-06T05:26:48.925456518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.212293264s" Nov 6 05:26:48.925575 containerd[1594]: time="2025-11-06T05:26:48.925506162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 6 05:26:48.929731 containerd[1594]: time="2025-11-06T05:26:48.929693303Z" level=info msg="CreateContainer within sandbox \"ce6b8e881802794152217784400a037495fbc509001a0ff3ca1c0ea8f3415117\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 6 05:26:48.932772 kubelet[2728]: E1106 05:26:48.932724 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:48.932772 kubelet[2728]: W1106 05:26:48.932743 2728 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:48.932772 kubelet[2728]: E1106 05:26:48.932762 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:48.941144 containerd[1594]: time="2025-11-06T05:26:48.941104307Z" level=info msg="Container 94e6a1d4d1775bd7efe56f47f09bffb2a9df6265bc932a54061d5a055c16a8f7: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:26:48.950723 containerd[1594]: time="2025-11-06T05:26:48.950673284Z" level=info msg="CreateContainer within sandbox \"ce6b8e881802794152217784400a037495fbc509001a0ff3ca1c0ea8f3415117\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"94e6a1d4d1775bd7efe56f47f09bffb2a9df6265bc932a54061d5a055c16a8f7\"" Nov 6 05:26:48.951113 containerd[1594]: time="2025-11-06T05:26:48.951075823Z" level=info msg="StartContainer for \"94e6a1d4d1775bd7efe56f47f09bffb2a9df6265bc932a54061d5a055c16a8f7\"" Nov 6 05:26:48.952812 containerd[1594]: time="2025-11-06T05:26:48.952787333Z" level=info msg="connecting to shim 94e6a1d4d1775bd7efe56f47f09bffb2a9df6265bc932a54061d5a055c16a8f7" address="unix:///run/containerd/s/f6c94aea75b1cf387bae8ae89cc7f114b130fbe656d7e918057ae2b300257fe0" protocol=ttrpc version=3 Nov 6 05:26:48.953736 kubelet[2728]: E1106 05:26:48.953706 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:48.953736 kubelet[2728]: W1106 05:26:48.953735 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:48.953825 kubelet[2728]: E1106 05:26:48.953754 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:48.954117 kubelet[2728]: E1106 05:26:48.954080 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:48.954396 kubelet[2728]: W1106 05:26:48.954128 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:48.954396 kubelet[2728]: E1106 05:26:48.954141 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:26:48.954522 kubelet[2728]: E1106 05:26:48.954503 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:26:48.954522 kubelet[2728]: W1106 05:26:48.954516 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:26:48.954583 kubelet[2728]: E1106 05:26:48.954526 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:26:48.983697 systemd[1]: Started cri-containerd-94e6a1d4d1775bd7efe56f47f09bffb2a9df6265bc932a54061d5a055c16a8f7.scope - libcontainer container 94e6a1d4d1775bd7efe56f47f09bffb2a9df6265bc932a54061d5a055c16a8f7. Nov 6 05:26:49.032737 containerd[1594]: time="2025-11-06T05:26:49.032691673Z" level=info msg="StartContainer for \"94e6a1d4d1775bd7efe56f47f09bffb2a9df6265bc932a54061d5a055c16a8f7\" returns successfully" Nov 6 05:26:49.043355 systemd[1]: cri-containerd-94e6a1d4d1775bd7efe56f47f09bffb2a9df6265bc932a54061d5a055c16a8f7.scope: Deactivated successfully. Nov 6 05:26:49.046319 containerd[1594]: time="2025-11-06T05:26:49.046126234Z" level=info msg="received exit event container_id:\"94e6a1d4d1775bd7efe56f47f09bffb2a9df6265bc932a54061d5a055c16a8f7\" id:\"94e6a1d4d1775bd7efe56f47f09bffb2a9df6265bc932a54061d5a055c16a8f7\" pid:3557 exited_at:{seconds:1762406809 nanos:45215376}" Nov 6 05:26:49.074256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94e6a1d4d1775bd7efe56f47f09bffb2a9df6265bc932a54061d5a055c16a8f7-rootfs.mount: Deactivated successfully. Nov 6 05:26:49.867510 kubelet[2728]: E1106 05:26:49.867100 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:49.867510 kubelet[2728]: E1106 05:26:49.867344 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:49.869842 containerd[1594]: time="2025-11-06T05:26:49.869757613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 05:26:50.794001 kubelet[2728]: E1106 05:26:50.793920 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjsxf" podUID="a5b9a4e3-c326-4625-87d3-4243511ed604" Nov 6 05:26:52.794280 kubelet[2728]: E1106 05:26:52.794219 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjsxf" podUID="a5b9a4e3-c326-4625-87d3-4243511ed604" Nov 6 05:26:52.987326 containerd[1594]: time="2025-11-06T05:26:52.987257790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:52.988267 containerd[1594]: time="2025-11-06T05:26:52.988219673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 6 05:26:52.989396 containerd[1594]: time="2025-11-06T05:26:52.989355563Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:52.991828 containerd[1594]: time="2025-11-06T05:26:52.991771916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:26:52.992387 containerd[1594]: time="2025-11-06T05:26:52.992340077Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.122542178s" Nov 6 05:26:52.992387 containerd[1594]: time="2025-11-06T05:26:52.992382657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 05:26:52.996146 containerd[1594]: time="2025-11-06T05:26:52.996094271Z" level=info msg="CreateContainer within sandbox \"ce6b8e881802794152217784400a037495fbc509001a0ff3ca1c0ea8f3415117\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 05:26:53.006303 containerd[1594]: time="2025-11-06T05:26:53.006237692Z" level=info msg="Container 9455e56e88e2e7e4b88ec1a110b172efc41cf2b4c4d10aca9eb627b0aa41b53d: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:26:53.017980 containerd[1594]: time="2025-11-06T05:26:53.017918877Z" level=info msg="CreateContainer within sandbox \"ce6b8e881802794152217784400a037495fbc509001a0ff3ca1c0ea8f3415117\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9455e56e88e2e7e4b88ec1a110b172efc41cf2b4c4d10aca9eb627b0aa41b53d\"" Nov 6 05:26:53.018617 containerd[1594]: time="2025-11-06T05:26:53.018566668Z" level=info msg="StartContainer for \"9455e56e88e2e7e4b88ec1a110b172efc41cf2b4c4d10aca9eb627b0aa41b53d\"" Nov 6 05:26:53.020062 containerd[1594]: time="2025-11-06T05:26:53.020026387Z" level=info msg="connecting to shim 9455e56e88e2e7e4b88ec1a110b172efc41cf2b4c4d10aca9eb627b0aa41b53d" address="unix:///run/containerd/s/f6c94aea75b1cf387bae8ae89cc7f114b130fbe656d7e918057ae2b300257fe0" protocol=ttrpc version=3 Nov 6 05:26:53.046623 systemd[1]: Started cri-containerd-9455e56e88e2e7e4b88ec1a110b172efc41cf2b4c4d10aca9eb627b0aa41b53d.scope - libcontainer container 9455e56e88e2e7e4b88ec1a110b172efc41cf2b4c4d10aca9eb627b0aa41b53d. Nov 6 05:26:53.093449 containerd[1594]: time="2025-11-06T05:26:53.093388960Z" level=info msg="StartContainer for \"9455e56e88e2e7e4b88ec1a110b172efc41cf2b4c4d10aca9eb627b0aa41b53d\" returns successfully" Nov 6 05:26:53.877537 kubelet[2728]: E1106 05:26:53.877457 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:54.116769 systemd[1]: cri-containerd-9455e56e88e2e7e4b88ec1a110b172efc41cf2b4c4d10aca9eb627b0aa41b53d.scope: Deactivated successfully. Nov 6 05:26:54.117203 systemd[1]: cri-containerd-9455e56e88e2e7e4b88ec1a110b172efc41cf2b4c4d10aca9eb627b0aa41b53d.scope: Consumed 671ms CPU time, 177.2M memory peak, 3.2M read from disk, 171.3M written to disk. Nov 6 05:26:54.123694 containerd[1594]: time="2025-11-06T05:26:54.123629285Z" level=info msg="received exit event container_id:\"9455e56e88e2e7e4b88ec1a110b172efc41cf2b4c4d10aca9eb627b0aa41b53d\" id:\"9455e56e88e2e7e4b88ec1a110b172efc41cf2b4c4d10aca9eb627b0aa41b53d\" pid:3616 exited_at:{seconds:1762406814 nanos:117560545}" Nov 6 05:26:54.156005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9455e56e88e2e7e4b88ec1a110b172efc41cf2b4c4d10aca9eb627b0aa41b53d-rootfs.mount: Deactivated successfully. 
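The block of repeated kubelet messages above is one failure reported three ways: the FlexVolume dynamic-plugin probe looks for a driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the binary is not installed, so the `init` call produces no output at all, and decoding that empty output as JSON fails with "unexpected end of JSON input". A minimal Go sketch of both halves (the DriverStatus struct below is illustrative, not kubelet's exact type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus approximates the JSON a FlexVolume driver is expected to
// print in reply to "init"; the field set here is illustrative.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st DriverStatus

	// The nodeagent~uds driver binary is missing, so the call produces no
	// output; decoding the empty string yields the exact error in the log.
	fmt.Println(json.Unmarshal([]byte(""), &st)) // unexpected end of JSON input

	// A present, working driver would answer "init" with something like this.
	ok := `{"status":"Success","capabilities":{"attach":false}}`
	if err := json.Unmarshal([]byte(ok), &st); err == nil {
		fmt.Println(st.Status, st.Capabilities) // Success map[attach:false]
	}
}
```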
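The recurring "Nameserver limits exceeded" warnings come from the host resolv.conf listing more nameservers than the resolver limit of three that kubelet honors, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied and the rest are dropped. A rough sketch of that trimming, assuming a hypothetical resolv.conf with a fourth entry (this is not kubelet's own parser):

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic resolver limit of three nameservers
// that kubelet warns about; treat the exact value as an assumption here.
const maxNameservers = 3

// applyNameserverLimit keeps only the first maxNameservers entries,
// returning the applied list and the ones that were omitted.
func applyNameserverLimit(resolvConf string) (applied, omitted []string) {
	var all []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			all = append(all, fields[1])
		}
	}
	if len(all) <= maxNameservers {
		return all, nil
	}
	return all[:maxNameservers], all[maxNameservers:]
}

func main() {
	// Hypothetical resolv.conf with one more entry than the limit allows.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	applied, omitted := applyNameserverLimit(conf)
	fmt.Println("applied:", strings.Join(applied, " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
	fmt.Println("omitted:", omitted)
}
```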
Nov 6 05:26:54.182413 kubelet[2728]: I1106 05:26:54.182351 2728 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 05:26:54.276574 systemd[1]: Created slice kubepods-burstable-pod600f2205_e1d2_4e4b_bac1_c6ad7172351f.slice - libcontainer container kubepods-burstable-pod600f2205_e1d2_4e4b_bac1_c6ad7172351f.slice. Nov 6 05:26:54.285405 systemd[1]: Created slice kubepods-besteffort-pod0f73f8bc_c9f9_4d57_b6dd_3170e4d7175a.slice - libcontainer container kubepods-besteffort-pod0f73f8bc_c9f9_4d57_b6dd_3170e4d7175a.slice. Nov 6 05:26:54.289797 kubelet[2728]: I1106 05:26:54.289739 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z94n9\" (UniqueName: \"kubernetes.io/projected/0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a-kube-api-access-z94n9\") pod \"calico-apiserver-5fd5b89b5c-rs8qz\" (UID: \"0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a\") " pod="calico-apiserver/calico-apiserver-5fd5b89b5c-rs8qz" Nov 6 05:26:54.289797 kubelet[2728]: I1106 05:26:54.289780 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a-calico-apiserver-certs\") pod \"calico-apiserver-5fd5b89b5c-rs8qz\" (UID: \"0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a\") " pod="calico-apiserver/calico-apiserver-5fd5b89b5c-rs8qz" Nov 6 05:26:54.289797 kubelet[2728]: I1106 05:26:54.289804 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/600f2205-e1d2-4e4b-bac1-c6ad7172351f-config-volume\") pod \"coredns-674b8bbfcf-m5td6\" (UID: \"600f2205-e1d2-4e4b-bac1-c6ad7172351f\") " pod="kube-system/coredns-674b8bbfcf-m5td6" Nov 6 05:26:54.290046 kubelet[2728]: I1106 05:26:54.289822 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn2tz\" (UniqueName: \"kubernetes.io/projected/600f2205-e1d2-4e4b-bac1-c6ad7172351f-kube-api-access-kn2tz\") pod \"coredns-674b8bbfcf-m5td6\" (UID: \"600f2205-e1d2-4e4b-bac1-c6ad7172351f\") " pod="kube-system/coredns-674b8bbfcf-m5td6" Nov 6 05:26:54.294823 systemd[1]: Created slice kubepods-besteffort-pod08a93b65_3d61_4796_a494_3e40e8cba374.slice - libcontainer container kubepods-besteffort-pod08a93b65_3d61_4796_a494_3e40e8cba374.slice. Nov 6 05:26:54.302130 systemd[1]: Created slice kubepods-besteffort-pod01bbdde4_f3be_4868_9bd4_ca1075714d6d.slice - libcontainer container kubepods-besteffort-pod01bbdde4_f3be_4868_9bd4_ca1075714d6d.slice. Nov 6 05:26:54.310846 systemd[1]: Created slice kubepods-besteffort-pod01f13972_7d24_4634_a6cb_bffbc3b083bb.slice - libcontainer container kubepods-besteffort-pod01f13972_7d24_4634_a6cb_bffbc3b083bb.slice. Nov 6 05:26:54.319081 systemd[1]: Created slice kubepods-burstable-poda9bd2856_b08a_4413_8d50_2bcdbab25838.slice - libcontainer container kubepods-burstable-poda9bd2856_b08a_4413_8d50_2bcdbab25838.slice. Nov 6 05:26:54.326388 systemd[1]: Created slice kubepods-besteffort-podfbee8d7b_086c_45da_82fb_11a1baff350a.slice - libcontainer container kubepods-besteffort-podfbee8d7b_086c_45da_82fb_11a1baff350a.slice. 
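The "Created slice" entries show kubelet's systemd cgroup driver naming pod cgroups after the pod's QoS class and UID, with dashes in the UID replaced by underscores: kubepods-burstable-pod600f2205_e1d2_4e4b_bac1_c6ad7172351f.slice above corresponds to the coredns pod whose UID 600f2205-e1d2-4e4b-bac1-c6ad7172351f appears in the volume entries that follow. A small standalone sketch of that naming convention (not kubelet's own code):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the systemd slice name used for a pod: the QoS class
// ("burstable", "besteffort", or "" for guaranteed pods, which sit directly
// under kubepods.slice in this scheme) plus the pod UID with "-" mapped to "_".
func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "" {
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// Reproduces the slice created for the coredns-674b8bbfcf-m5td6 pod above.
	fmt.Println(podSliceName("burstable", "600f2205-e1d2-4e4b-bac1-c6ad7172351f"))
	// kubepods-burstable-pod600f2205_e1d2_4e4b_bac1_c6ad7172351f.slice
}
```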
Nov 6 05:26:54.390901 kubelet[2728]: I1106 05:26:54.390829 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9bd2856-b08a-4413-8d50-2bcdbab25838-config-volume\") pod \"coredns-674b8bbfcf-nwc4r\" (UID: \"a9bd2856-b08a-4413-8d50-2bcdbab25838\") " pod="kube-system/coredns-674b8bbfcf-nwc4r" Nov 6 05:26:54.390901 kubelet[2728]: I1106 05:26:54.390897 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01bbdde4-f3be-4868-9bd4-ca1075714d6d-whisker-backend-key-pair\") pod \"whisker-5c4d5787db-c5dcd\" (UID: \"01bbdde4-f3be-4868-9bd4-ca1075714d6d\") " pod="calico-system/whisker-5c4d5787db-c5dcd" Nov 6 05:26:54.391097 kubelet[2728]: I1106 05:26:54.390946 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fbee8d7b-086c-45da-82fb-11a1baff350a-calico-apiserver-certs\") pod \"calico-apiserver-5fd5b89b5c-qh578\" (UID: \"fbee8d7b-086c-45da-82fb-11a1baff350a\") " pod="calico-apiserver/calico-apiserver-5fd5b89b5c-qh578" Nov 6 05:26:54.391097 kubelet[2728]: I1106 05:26:54.390972 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65m2m\" (UniqueName: \"kubernetes.io/projected/fbee8d7b-086c-45da-82fb-11a1baff350a-kube-api-access-65m2m\") pod \"calico-apiserver-5fd5b89b5c-qh578\" (UID: \"fbee8d7b-086c-45da-82fb-11a1baff350a\") " pod="calico-apiserver/calico-apiserver-5fd5b89b5c-qh578" Nov 6 05:26:54.391097 kubelet[2728]: I1106 05:26:54.391061 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpnkl\" (UniqueName: \"kubernetes.io/projected/01f13972-7d24-4634-a6cb-bffbc3b083bb-kube-api-access-wpnkl\") pod \"goldmane-666569f655-z8s46\" (UID: \"01f13972-7d24-4634-a6cb-bffbc3b083bb\") " pod="calico-system/goldmane-666569f655-z8s46" Nov 6 05:26:54.391097 kubelet[2728]: I1106 05:26:54.391093 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01bbdde4-f3be-4868-9bd4-ca1075714d6d-whisker-ca-bundle\") pod \"whisker-5c4d5787db-c5dcd\" (UID: \"01bbdde4-f3be-4868-9bd4-ca1075714d6d\") " pod="calico-system/whisker-5c4d5787db-c5dcd" Nov 6 05:26:54.391207 kubelet[2728]: I1106 05:26:54.391116 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08a93b65-3d61-4796-a494-3e40e8cba374-tigera-ca-bundle\") pod \"calico-kube-controllers-54b64844bb-tt8w9\" (UID: \"08a93b65-3d61-4796-a494-3e40e8cba374\") " pod="calico-system/calico-kube-controllers-54b64844bb-tt8w9" Nov 6 05:26:54.391207 kubelet[2728]: I1106 05:26:54.391140 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dq79\" (UniqueName: \"kubernetes.io/projected/a9bd2856-b08a-4413-8d50-2bcdbab25838-kube-api-access-9dq79\") pod \"coredns-674b8bbfcf-nwc4r\" (UID: \"a9bd2856-b08a-4413-8d50-2bcdbab25838\") " pod="kube-system/coredns-674b8bbfcf-nwc4r" Nov 6 05:26:54.391207 kubelet[2728]: I1106 05:26:54.391171 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25zzn\" (UniqueName: 
\"kubernetes.io/projected/01bbdde4-f3be-4868-9bd4-ca1075714d6d-kube-api-access-25zzn\") pod \"whisker-5c4d5787db-c5dcd\" (UID: \"01bbdde4-f3be-4868-9bd4-ca1075714d6d\") " pod="calico-system/whisker-5c4d5787db-c5dcd" Nov 6 05:26:54.391207 kubelet[2728]: I1106 05:26:54.391195 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f13972-7d24-4634-a6cb-bffbc3b083bb-config\") pod \"goldmane-666569f655-z8s46\" (UID: \"01f13972-7d24-4634-a6cb-bffbc3b083bb\") " pod="calico-system/goldmane-666569f655-z8s46" Nov 6 05:26:54.391306 kubelet[2728]: I1106 05:26:54.391218 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czv8z\" (UniqueName: \"kubernetes.io/projected/08a93b65-3d61-4796-a494-3e40e8cba374-kube-api-access-czv8z\") pod \"calico-kube-controllers-54b64844bb-tt8w9\" (UID: \"08a93b65-3d61-4796-a494-3e40e8cba374\") " pod="calico-system/calico-kube-controllers-54b64844bb-tt8w9" Nov 6 05:26:54.391306 kubelet[2728]: I1106 05:26:54.391267 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01f13972-7d24-4634-a6cb-bffbc3b083bb-goldmane-ca-bundle\") pod \"goldmane-666569f655-z8s46\" (UID: \"01f13972-7d24-4634-a6cb-bffbc3b083bb\") " pod="calico-system/goldmane-666569f655-z8s46" Nov 6 05:26:54.391306 kubelet[2728]: I1106 05:26:54.391298 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/01f13972-7d24-4634-a6cb-bffbc3b083bb-goldmane-key-pair\") pod \"goldmane-666569f655-z8s46\" (UID: \"01f13972-7d24-4634-a6cb-bffbc3b083bb\") " pod="calico-system/goldmane-666569f655-z8s46" Nov 6 05:26:54.582184 kubelet[2728]: E1106 05:26:54.582132 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:54.582848 containerd[1594]: time="2025-11-06T05:26:54.582790731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m5td6,Uid:600f2205-e1d2-4e4b-bac1-c6ad7172351f,Namespace:kube-system,Attempt:0,}" Nov 6 05:26:54.589797 containerd[1594]: time="2025-11-06T05:26:54.589748163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd5b89b5c-rs8qz,Uid:0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:26:54.607778 containerd[1594]: time="2025-11-06T05:26:54.607716313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c4d5787db-c5dcd,Uid:01bbdde4-f3be-4868-9bd4-ca1075714d6d,Namespace:calico-system,Attempt:0,}" Nov 6 05:26:54.608261 containerd[1594]: time="2025-11-06T05:26:54.608174206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b64844bb-tt8w9,Uid:08a93b65-3d61-4796-a494-3e40e8cba374,Namespace:calico-system,Attempt:0,}" Nov 6 05:26:54.616116 containerd[1594]: time="2025-11-06T05:26:54.616082408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z8s46,Uid:01f13972-7d24-4634-a6cb-bffbc3b083bb,Namespace:calico-system,Attempt:0,}" Nov 6 05:26:54.623012 kubelet[2728]: E1106 05:26:54.622959 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 6 05:26:54.623820 containerd[1594]: time="2025-11-06T05:26:54.623764075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nwc4r,Uid:a9bd2856-b08a-4413-8d50-2bcdbab25838,Namespace:kube-system,Attempt:0,}" Nov 6 05:26:54.631924 containerd[1594]: time="2025-11-06T05:26:54.631888526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd5b89b5c-qh578,Uid:fbee8d7b-086c-45da-82fb-11a1baff350a,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:26:54.767370 containerd[1594]: time="2025-11-06T05:26:54.767197803Z" level=error msg="Failed to destroy network for sandbox \"56b63f9288ac6977f2d89951a3041618b3e0f580e70e1ff0c51dcf6a875f142d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.774769 containerd[1594]: time="2025-11-06T05:26:54.774716794Z" level=error msg="Failed to destroy network for sandbox \"8eb42a0c687a7a0c978eff9af6cdfdbe80c6ee00678bf8a59edbdeca4dbe4bac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.775245 containerd[1594]: time="2025-11-06T05:26:54.775215213Z" level=error msg="Failed to destroy network for sandbox \"49fcf5143a4794a9e2332de9dd8781c8be5a80f5b419f5755b1560b318e173d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.775677 containerd[1594]: time="2025-11-06T05:26:54.775526579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b64844bb-tt8w9,Uid:08a93b65-3d61-4796-a494-3e40e8cba374,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b63f9288ac6977f2d89951a3041618b3e0f580e70e1ff0c51dcf6a875f142d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.778934 kubelet[2728]: E1106 05:26:54.776105 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b63f9288ac6977f2d89951a3041618b3e0f580e70e1ff0c51dcf6a875f142d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.778934 kubelet[2728]: E1106 05:26:54.777827 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b63f9288ac6977f2d89951a3041618b3e0f580e70e1ff0c51dcf6a875f142d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54b64844bb-tt8w9" Nov 6 05:26:54.778934 kubelet[2728]: E1106 05:26:54.777858 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b63f9288ac6977f2d89951a3041618b3e0f580e70e1ff0c51dcf6a875f142d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54b64844bb-tt8w9" Nov 6 05:26:54.779095 kubelet[2728]: E1106 05:26:54.777925 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54b64844bb-tt8w9_calico-system(08a93b65-3d61-4796-a494-3e40e8cba374)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54b64844bb-tt8w9_calico-system(08a93b65-3d61-4796-a494-3e40e8cba374)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56b63f9288ac6977f2d89951a3041618b3e0f580e70e1ff0c51dcf6a875f142d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54b64844bb-tt8w9" podUID="08a93b65-3d61-4796-a494-3e40e8cba374" Nov 6 05:26:54.781152 containerd[1594]: time="2025-11-06T05:26:54.780630702Z" level=error msg="Failed to destroy network for sandbox \"22a38519ac42cd1121011c36905f07246e2827aea13bb421af83b510ff6fb91e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.785729 containerd[1594]: time="2025-11-06T05:26:54.785654793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd5b89b5c-rs8qz,Uid:0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb42a0c687a7a0c978eff9af6cdfdbe80c6ee00678bf8a59edbdeca4dbe4bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.786355 kubelet[2728]: E1106 05:26:54.786302 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb42a0c687a7a0c978eff9af6cdfdbe80c6ee00678bf8a59edbdeca4dbe4bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.787620 kubelet[2728]: E1106 05:26:54.787580 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb42a0c687a7a0c978eff9af6cdfdbe80c6ee00678bf8a59edbdeca4dbe4bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-rs8qz" Nov 6 05:26:54.787620 kubelet[2728]: E1106 05:26:54.787611 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb42a0c687a7a0c978eff9af6cdfdbe80c6ee00678bf8a59edbdeca4dbe4bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-rs8qz" Nov 6 05:26:54.787928 kubelet[2728]: E1106 05:26:54.787675 2728 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fd5b89b5c-rs8qz_calico-apiserver(0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fd5b89b5c-rs8qz_calico-apiserver(0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8eb42a0c687a7a0c978eff9af6cdfdbe80c6ee00678bf8a59edbdeca4dbe4bac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-rs8qz" podUID="0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a" Nov 6 05:26:54.789078 containerd[1594]: time="2025-11-06T05:26:54.789011565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m5td6,Uid:600f2205-e1d2-4e4b-bac1-c6ad7172351f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a38519ac42cd1121011c36905f07246e2827aea13bb421af83b510ff6fb91e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.789500 containerd[1594]: time="2025-11-06T05:26:54.789223634Z" level=error msg="Failed to destroy network for sandbox \"588e9133ecd0a6ff1016cee6b12ac05637de5c24a29fed20f265d35490a49da0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.789554 kubelet[2728]: E1106 05:26:54.789406 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a38519ac42cd1121011c36905f07246e2827aea13bb421af83b510ff6fb91e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.789645 kubelet[2728]: E1106 05:26:54.789623 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a38519ac42cd1121011c36905f07246e2827aea13bb421af83b510ff6fb91e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m5td6" Nov 6 05:26:54.789713 kubelet[2728]: E1106 05:26:54.789698 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a38519ac42cd1121011c36905f07246e2827aea13bb421af83b510ff6fb91e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m5td6" Nov 6 05:26:54.789831 kubelet[2728]: E1106 05:26:54.789805 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-m5td6_kube-system(600f2205-e1d2-4e4b-bac1-c6ad7172351f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-m5td6_kube-system(600f2205-e1d2-4e4b-bac1-c6ad7172351f)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"22a38519ac42cd1121011c36905f07246e2827aea13bb421af83b510ff6fb91e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m5td6" podUID="600f2205-e1d2-4e4b-bac1-c6ad7172351f" Nov 6 05:26:54.790211 containerd[1594]: time="2025-11-06T05:26:54.790174425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c4d5787db-c5dcd,Uid:01bbdde4-f3be-4868-9bd4-ca1075714d6d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49fcf5143a4794a9e2332de9dd8781c8be5a80f5b419f5755b1560b318e173d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.790451 kubelet[2728]: E1106 05:26:54.790388 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49fcf5143a4794a9e2332de9dd8781c8be5a80f5b419f5755b1560b318e173d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.790523 kubelet[2728]: E1106 05:26:54.790446 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49fcf5143a4794a9e2332de9dd8781c8be5a80f5b419f5755b1560b318e173d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c4d5787db-c5dcd" Nov 6 05:26:54.790523 kubelet[2728]: E1106 05:26:54.790505 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49fcf5143a4794a9e2332de9dd8781c8be5a80f5b419f5755b1560b318e173d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c4d5787db-c5dcd" Nov 6 05:26:54.790703 kubelet[2728]: E1106 05:26:54.790670 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c4d5787db-c5dcd_calico-system(01bbdde4-f3be-4868-9bd4-ca1075714d6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c4d5787db-c5dcd_calico-system(01bbdde4-f3be-4868-9bd4-ca1075714d6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49fcf5143a4794a9e2332de9dd8781c8be5a80f5b419f5755b1560b318e173d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c4d5787db-c5dcd" podUID="01bbdde4-f3be-4868-9bd4-ca1075714d6d" Nov 6 05:26:54.792686 containerd[1594]: time="2025-11-06T05:26:54.792638916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z8s46,Uid:01f13972-7d24-4634-a6cb-bffbc3b083bb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"588e9133ecd0a6ff1016cee6b12ac05637de5c24a29fed20f265d35490a49da0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.795987 containerd[1594]: time="2025-11-06T05:26:54.795886362Z" level=error msg="Failed to destroy network for sandbox \"eb1c411e7707d08636dbe19207d04f7307038f18f0bc7a2604697a998e3a880a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.796949 kubelet[2728]: E1106 05:26:54.796902 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"588e9133ecd0a6ff1016cee6b12ac05637de5c24a29fed20f265d35490a49da0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.796997 kubelet[2728]: E1106 05:26:54.796948 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"588e9133ecd0a6ff1016cee6b12ac05637de5c24a29fed20f265d35490a49da0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-z8s46" Nov 6 05:26:54.796997 kubelet[2728]: E1106 05:26:54.796969 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"588e9133ecd0a6ff1016cee6b12ac05637de5c24a29fed20f265d35490a49da0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-z8s46" Nov 6 05:26:54.797066 kubelet[2728]: E1106 05:26:54.797022 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-z8s46_calico-system(01f13972-7d24-4634-a6cb-bffbc3b083bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-z8s46_calico-system(01f13972-7d24-4634-a6cb-bffbc3b083bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"588e9133ecd0a6ff1016cee6b12ac05637de5c24a29fed20f265d35490a49da0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-z8s46" podUID="01f13972-7d24-4634-a6cb-bffbc3b083bb" Nov 6 05:26:54.799132 containerd[1594]: time="2025-11-06T05:26:54.799076951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd5b89b5c-qh578,Uid:fbee8d7b-086c-45da-82fb-11a1baff350a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1c411e7707d08636dbe19207d04f7307038f18f0bc7a2604697a998e3a880a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.799554 kubelet[2728]: E1106 05:26:54.799485 2728 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1c411e7707d08636dbe19207d04f7307038f18f0bc7a2604697a998e3a880a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.800208 kubelet[2728]: E1106 05:26:54.799564 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1c411e7707d08636dbe19207d04f7307038f18f0bc7a2604697a998e3a880a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-qh578" Nov 6 05:26:54.800208 kubelet[2728]: E1106 05:26:54.800168 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1c411e7707d08636dbe19207d04f7307038f18f0bc7a2604697a998e3a880a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-qh578" Nov 6 05:26:54.800285 kubelet[2728]: E1106 05:26:54.800239 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fd5b89b5c-qh578_calico-apiserver(fbee8d7b-086c-45da-82fb-11a1baff350a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fd5b89b5c-qh578_calico-apiserver(fbee8d7b-086c-45da-82fb-11a1baff350a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb1c411e7707d08636dbe19207d04f7307038f18f0bc7a2604697a998e3a880a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-qh578" podUID="fbee8d7b-086c-45da-82fb-11a1baff350a" Nov 6 05:26:54.803938 systemd[1]: Created slice kubepods-besteffort-poda5b9a4e3_c326_4625_87d3_4243511ed604.slice - libcontainer container kubepods-besteffort-poda5b9a4e3_c326_4625_87d3_4243511ed604.slice. 
Nov 6 05:26:54.806958 containerd[1594]: time="2025-11-06T05:26:54.806817709Z" level=error msg="Failed to destroy network for sandbox \"ec8a9833a6f536bea27964bcb2fedc8a2a82a43a7b9d6d0cd03ef1927a99b280\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.807158 containerd[1594]: time="2025-11-06T05:26:54.807104158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gjsxf,Uid:a5b9a4e3-c326-4625-87d3-4243511ed604,Namespace:calico-system,Attempt:0,}" Nov 6 05:26:54.810049 containerd[1594]: time="2025-11-06T05:26:54.809998149Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nwc4r,Uid:a9bd2856-b08a-4413-8d50-2bcdbab25838,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8a9833a6f536bea27964bcb2fedc8a2a82a43a7b9d6d0cd03ef1927a99b280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.811443 kubelet[2728]: E1106 05:26:54.810435 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8a9833a6f536bea27964bcb2fedc8a2a82a43a7b9d6d0cd03ef1927a99b280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.811443 kubelet[2728]: E1106 05:26:54.810652 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8a9833a6f536bea27964bcb2fedc8a2a82a43a7b9d6d0cd03ef1927a99b280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nwc4r" Nov 6 05:26:54.811443 kubelet[2728]: E1106 05:26:54.810720 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8a9833a6f536bea27964bcb2fedc8a2a82a43a7b9d6d0cd03ef1927a99b280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nwc4r" Nov 6 05:26:54.811627 kubelet[2728]: E1106 05:26:54.810796 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nwc4r_kube-system(a9bd2856-b08a-4413-8d50-2bcdbab25838)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nwc4r_kube-system(a9bd2856-b08a-4413-8d50-2bcdbab25838)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec8a9833a6f536bea27964bcb2fedc8a2a82a43a7b9d6d0cd03ef1927a99b280\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nwc4r" podUID="a9bd2856-b08a-4413-8d50-2bcdbab25838" Nov 6 05:26:54.859215 containerd[1594]: time="2025-11-06T05:26:54.859082419Z" level=error msg="Failed to destroy network for sandbox 
\"c918bf5db3e412ae2974b5621b619d768972dacaa43c888aa76ebe1250dfa279\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.861561 containerd[1594]: time="2025-11-06T05:26:54.861527654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gjsxf,Uid:a5b9a4e3-c326-4625-87d3-4243511ed604,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c918bf5db3e412ae2974b5621b619d768972dacaa43c888aa76ebe1250dfa279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.861830 kubelet[2728]: E1106 05:26:54.861772 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c918bf5db3e412ae2974b5621b619d768972dacaa43c888aa76ebe1250dfa279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:26:54.861972 kubelet[2728]: E1106 05:26:54.861844 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c918bf5db3e412ae2974b5621b619d768972dacaa43c888aa76ebe1250dfa279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gjsxf" Nov 6 05:26:54.861972 kubelet[2728]: E1106 05:26:54.861866 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c918bf5db3e412ae2974b5621b619d768972dacaa43c888aa76ebe1250dfa279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gjsxf" Nov 6 05:26:54.861972 kubelet[2728]: E1106 05:26:54.861924 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gjsxf_calico-system(a5b9a4e3-c326-4625-87d3-4243511ed604)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gjsxf_calico-system(a5b9a4e3-c326-4625-87d3-4243511ed604)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c918bf5db3e412ae2974b5621b619d768972dacaa43c888aa76ebe1250dfa279\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gjsxf" podUID="a5b9a4e3-c326-4625-87d3-4243511ed604" Nov 6 05:26:54.882557 kubelet[2728]: E1106 05:26:54.882524 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:26:54.883198 containerd[1594]: time="2025-11-06T05:26:54.883176665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 05:27:01.885563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284010082.mount: Deactivated successfully. 
Nov 6 05:27:02.499256 containerd[1594]: time="2025-11-06T05:27:02.499173737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:27:02.500183 containerd[1594]: time="2025-11-06T05:27:02.500107433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 6 05:27:02.501290 containerd[1594]: time="2025-11-06T05:27:02.501245402Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:27:02.524982 containerd[1594]: time="2025-11-06T05:27:02.524897922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:27:02.525337 containerd[1594]: time="2025-11-06T05:27:02.525279428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.642065783s" Nov 6 05:27:02.525337 containerd[1594]: time="2025-11-06T05:27:02.525317320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 05:27:02.549825 containerd[1594]: time="2025-11-06T05:27:02.549781735Z" level=info msg="CreateContainer within sandbox \"ce6b8e881802794152217784400a037495fbc509001a0ff3ca1c0ea8f3415117\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 05:27:02.561204 containerd[1594]: time="2025-11-06T05:27:02.561142074Z" level=info msg="Container 7e08ba2c6bbb6fa52a2b30d4ed9e17f9b7f1d42b62fe0d658f292773d8886d43: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:27:02.573645 containerd[1594]: time="2025-11-06T05:27:02.573522059Z" level=info msg="CreateContainer within sandbox \"ce6b8e881802794152217784400a037495fbc509001a0ff3ca1c0ea8f3415117\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7e08ba2c6bbb6fa52a2b30d4ed9e17f9b7f1d42b62fe0d658f292773d8886d43\"" Nov 6 05:27:02.582237 containerd[1594]: time="2025-11-06T05:27:02.582151613Z" level=info msg="StartContainer for \"7e08ba2c6bbb6fa52a2b30d4ed9e17f9b7f1d42b62fe0d658f292773d8886d43\"" Nov 6 05:27:02.583692 containerd[1594]: time="2025-11-06T05:27:02.583663956Z" level=info msg="connecting to shim 7e08ba2c6bbb6fa52a2b30d4ed9e17f9b7f1d42b62fe0d658f292773d8886d43" address="unix:///run/containerd/s/f6c94aea75b1cf387bae8ae89cc7f114b130fbe656d7e918057ae2b300257fe0" protocol=ttrpc version=3 Nov 6 05:27:02.606620 systemd[1]: Started cri-containerd-7e08ba2c6bbb6fa52a2b30d4ed9e17f9b7f1d42b62fe0d658f292773d8886d43.scope - libcontainer container 7e08ba2c6bbb6fa52a2b30d4ed9e17f9b7f1d42b62fe0d658f292773d8886d43. Nov 6 05:27:02.660181 containerd[1594]: time="2025-11-06T05:27:02.660106786Z" level=info msg="StartContainer for \"7e08ba2c6bbb6fa52a2b30d4ed9e17f9b7f1d42b62fe0d658f292773d8886d43\" returns successfully" Nov 6 05:27:02.743369 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 6 05:27:02.744395 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Nov 6 05:27:02.911441 kubelet[2728]: E1106 05:27:02.911319 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:02.951244 kubelet[2728]: I1106 05:27:02.951189 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01bbdde4-f3be-4868-9bd4-ca1075714d6d-whisker-ca-bundle\") pod \"01bbdde4-f3be-4868-9bd4-ca1075714d6d\" (UID: \"01bbdde4-f3be-4868-9bd4-ca1075714d6d\") " Nov 6 05:27:02.951244 kubelet[2728]: I1106 05:27:02.951234 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01bbdde4-f3be-4868-9bd4-ca1075714d6d-whisker-backend-key-pair\") pod \"01bbdde4-f3be-4868-9bd4-ca1075714d6d\" (UID: \"01bbdde4-f3be-4868-9bd4-ca1075714d6d\") " Nov 6 05:27:02.951456 kubelet[2728]: I1106 05:27:02.951261 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25zzn\" (UniqueName: \"kubernetes.io/projected/01bbdde4-f3be-4868-9bd4-ca1075714d6d-kube-api-access-25zzn\") pod \"01bbdde4-f3be-4868-9bd4-ca1075714d6d\" (UID: \"01bbdde4-f3be-4868-9bd4-ca1075714d6d\") " Nov 6 05:27:02.952160 kubelet[2728]: I1106 05:27:02.952064 2728 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01bbdde4-f3be-4868-9bd4-ca1075714d6d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "01bbdde4-f3be-4868-9bd4-ca1075714d6d" (UID: "01bbdde4-f3be-4868-9bd4-ca1075714d6d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 05:27:02.955921 kubelet[2728]: I1106 05:27:02.955847 2728 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bbdde4-f3be-4868-9bd4-ca1075714d6d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "01bbdde4-f3be-4868-9bd4-ca1075714d6d" (UID: "01bbdde4-f3be-4868-9bd4-ca1075714d6d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 05:27:02.955921 kubelet[2728]: I1106 05:27:02.955882 2728 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01bbdde4-f3be-4868-9bd4-ca1075714d6d-kube-api-access-25zzn" (OuterVolumeSpecName: "kube-api-access-25zzn") pod "01bbdde4-f3be-4868-9bd4-ca1075714d6d" (UID: "01bbdde4-f3be-4868-9bd4-ca1075714d6d"). InnerVolumeSpecName "kube-api-access-25zzn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 05:27:02.957230 systemd[1]: var-lib-kubelet-pods-01bbdde4\x2df3be\x2d4868\x2d9bd4\x2dca1075714d6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d25zzn.mount: Deactivated successfully. Nov 6 05:27:02.957358 systemd[1]: var-lib-kubelet-pods-01bbdde4\x2df3be\x2d4868\x2d9bd4\x2dca1075714d6d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
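Annotation: the two mount units deactivated above are systemd's escaped form of the old whisker pod's volume paths: "/" in the path becomes "-" in the unit name, and a literal "-" or "~" inside a path component becomes a \xNN escape (\x2d, \x7e). A small decoding sketch, purely illustrative and not part of the log, that recovers the path from such a unit name:

import re

def systemd_mount_unit_to_path(unit: str) -> str:
    # "-" in the unit name separates path components ...
    path = "/" + unit.removesuffix(".mount").replace("-", "/")
    # ... and \xNN escapes stand for literal bytes (e.g. \x2d = "-", \x7e = "~").
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

print(systemd_mount_unit_to_path(
    "var-lib-kubelet-pods-01bbdde4\\x2df3be\\x2d4868\\x2d9bd4\\x2dca1075714d6d-"
    "volumes-kubernetes.io\\x7eprojected-kube\\x2dapi\\x2daccess\\x2d25zzn.mount"
))
# -> /var/lib/kubelet/pods/01bbdde4-f3be-4868-9bd4-ca1075714d6d/volumes/kubernetes.io~projected/kube-api-access-25zzn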
Nov 6 05:27:03.052161 kubelet[2728]: I1106 05:27:03.052083 2728 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01bbdde4-f3be-4868-9bd4-ca1075714d6d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 6 05:27:03.052161 kubelet[2728]: I1106 05:27:03.052130 2728 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01bbdde4-f3be-4868-9bd4-ca1075714d6d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 6 05:27:03.052161 kubelet[2728]: I1106 05:27:03.052140 2728 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-25zzn\" (UniqueName: \"kubernetes.io/projected/01bbdde4-f3be-4868-9bd4-ca1075714d6d-kube-api-access-25zzn\") on node \"localhost\" DevicePath \"\"" Nov 6 05:27:03.215578 systemd[1]: Removed slice kubepods-besteffort-pod01bbdde4_f3be_4868_9bd4_ca1075714d6d.slice - libcontainer container kubepods-besteffort-pod01bbdde4_f3be_4868_9bd4_ca1075714d6d.slice. Nov 6 05:27:03.252667 kubelet[2728]: I1106 05:27:03.252565 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-b7552" podStartSLOduration=1.390668971 podStartE2EDuration="20.252546413s" podCreationTimestamp="2025-11-06 05:26:43 +0000 UTC" firstStartedPulling="2025-11-06 05:26:43.664514338 +0000 UTC m=+18.981185263" lastFinishedPulling="2025-11-06 05:27:02.52639177 +0000 UTC m=+37.843062705" observedRunningTime="2025-11-06 05:27:03.25204463 +0000 UTC m=+38.568715575" watchObservedRunningTime="2025-11-06 05:27:03.252546413 +0000 UTC m=+38.569217348" Nov 6 05:27:03.911261 kubelet[2728]: E1106 05:27:03.911204 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:04.292813 systemd[1]: Created slice kubepods-besteffort-pod310afece_2da4_4aef_a7e0_89b082fc02bd.slice - libcontainer container kubepods-besteffort-pod310afece_2da4_4aef_a7e0_89b082fc02bd.slice. 
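Annotation: the "Observed pod startup duration" record above for calico-node-b7552 is internally consistent: podStartE2EDuration (20.252546413s, from creation at 05:26:43 to running at 05:27:03.25) minus the image-pull window (lastFinishedPulling minus firstStartedPulling, using the monotonic m=+ offsets) gives exactly podStartSLOduration, consistent with the startup SLO figure excluding time spent pulling images. A quick check of the arithmetic, illustrative only:

pull_window = 37.843062705 - 18.981185263   # lastFinishedPulling - firstStartedPulling = 18.861877442 s
slo = 20.252546413 - pull_window            # podStartE2EDuration minus the pull window
print(f"{slo:.9f}")                         # 1.390668971, matching podStartSLOduration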
Nov 6 05:27:04.361216 kubelet[2728]: I1106 05:27:04.361147 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/310afece-2da4-4aef-a7e0-89b082fc02bd-whisker-backend-key-pair\") pod \"whisker-67b6bb984-wk547\" (UID: \"310afece-2da4-4aef-a7e0-89b082fc02bd\") " pod="calico-system/whisker-67b6bb984-wk547" Nov 6 05:27:04.361216 kubelet[2728]: I1106 05:27:04.361200 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/310afece-2da4-4aef-a7e0-89b082fc02bd-whisker-ca-bundle\") pod \"whisker-67b6bb984-wk547\" (UID: \"310afece-2da4-4aef-a7e0-89b082fc02bd\") " pod="calico-system/whisker-67b6bb984-wk547" Nov 6 05:27:04.361216 kubelet[2728]: I1106 05:27:04.361220 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8rss\" (UniqueName: \"kubernetes.io/projected/310afece-2da4-4aef-a7e0-89b082fc02bd-kube-api-access-b8rss\") pod \"whisker-67b6bb984-wk547\" (UID: \"310afece-2da4-4aef-a7e0-89b082fc02bd\") " pod="calico-system/whisker-67b6bb984-wk547" Nov 6 05:27:04.800169 kubelet[2728]: I1106 05:27:04.800104 2728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01bbdde4-f3be-4868-9bd4-ca1075714d6d" path="/var/lib/kubelet/pods/01bbdde4-f3be-4868-9bd4-ca1075714d6d/volumes" Nov 6 05:27:04.837548 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:60242.service - OpenSSH per-connection server daemon (10.0.0.1:60242). Nov 6 05:27:04.899803 containerd[1594]: time="2025-11-06T05:27:04.899741987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67b6bb984-wk547,Uid:310afece-2da4-4aef-a7e0-89b082fc02bd,Namespace:calico-system,Attempt:0,}" Nov 6 05:27:04.912867 kubelet[2728]: E1106 05:27:04.912822 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:04.934283 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 60242 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:04.938323 sshd-session[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:04.948268 systemd-logind[1574]: New session 8 of user core. Nov 6 05:27:04.954726 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 05:27:05.106688 sshd[4042]: Connection closed by 10.0.0.1 port 60242 Nov 6 05:27:05.107372 sshd-session[4020]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:05.112168 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:60242.service: Deactivated successfully. Nov 6 05:27:05.114631 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 05:27:05.116264 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. Nov 6 05:27:05.117282 systemd-logind[1574]: Removed session 8. 
Nov 6 05:27:05.184407 systemd-networkd[1497]: cali30419b53c7d: Link UP Nov 6 05:27:05.185227 systemd-networkd[1497]: cali30419b53c7d: Gained carrier Nov 6 05:27:05.199673 containerd[1594]: 2025-11-06 05:27:05.007 [INFO][4052] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 05:27:05.199673 containerd[1594]: 2025-11-06 05:27:05.074 [INFO][4052] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--67b6bb984--wk547-eth0 whisker-67b6bb984- calico-system 310afece-2da4-4aef-a7e0-89b082fc02bd 985 0 2025-11-06 05:27:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67b6bb984 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-67b6bb984-wk547 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali30419b53c7d [] [] }} ContainerID="0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" Namespace="calico-system" Pod="whisker-67b6bb984-wk547" WorkloadEndpoint="localhost-k8s-whisker--67b6bb984--wk547-" Nov 6 05:27:05.199673 containerd[1594]: 2025-11-06 05:27:05.075 [INFO][4052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" Namespace="calico-system" Pod="whisker-67b6bb984-wk547" WorkloadEndpoint="localhost-k8s-whisker--67b6bb984--wk547-eth0" Nov 6 05:27:05.199673 containerd[1594]: 2025-11-06 05:27:05.141 [INFO][4079] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" HandleID="k8s-pod-network.0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" Workload="localhost-k8s-whisker--67b6bb984--wk547-eth0" Nov 6 05:27:05.199932 containerd[1594]: 2025-11-06 05:27:05.141 [INFO][4079] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" HandleID="k8s-pod-network.0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" Workload="localhost-k8s-whisker--67b6bb984--wk547-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005031d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-67b6bb984-wk547", "timestamp":"2025-11-06 05:27:05.141318829 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:27:05.199932 containerd[1594]: 2025-11-06 05:27:05.141 [INFO][4079] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:27:05.199932 containerd[1594]: 2025-11-06 05:27:05.142 [INFO][4079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:27:05.199932 containerd[1594]: 2025-11-06 05:27:05.142 [INFO][4079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:27:05.199932 containerd[1594]: 2025-11-06 05:27:05.150 [INFO][4079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" host="localhost" Nov 6 05:27:05.199932 containerd[1594]: 2025-11-06 05:27:05.155 [INFO][4079] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:27:05.199932 containerd[1594]: 2025-11-06 05:27:05.159 [INFO][4079] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:27:05.199932 containerd[1594]: 2025-11-06 05:27:05.161 [INFO][4079] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:05.199932 containerd[1594]: 2025-11-06 05:27:05.162 [INFO][4079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:05.199932 containerd[1594]: 2025-11-06 05:27:05.162 [INFO][4079] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" host="localhost" Nov 6 05:27:05.200155 containerd[1594]: 2025-11-06 05:27:05.164 [INFO][4079] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a Nov 6 05:27:05.200155 containerd[1594]: 2025-11-06 05:27:05.167 [INFO][4079] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" host="localhost" Nov 6 05:27:05.200155 containerd[1594]: 2025-11-06 05:27:05.172 [INFO][4079] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" host="localhost" Nov 6 05:27:05.200155 containerd[1594]: 2025-11-06 05:27:05.173 [INFO][4079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" host="localhost" Nov 6 05:27:05.200155 containerd[1594]: 2025-11-06 05:27:05.173 [INFO][4079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:27:05.200155 containerd[1594]: 2025-11-06 05:27:05.173 [INFO][4079] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" HandleID="k8s-pod-network.0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" Workload="localhost-k8s-whisker--67b6bb984--wk547-eth0" Nov 6 05:27:05.200278 containerd[1594]: 2025-11-06 05:27:05.176 [INFO][4052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" Namespace="calico-system" Pod="whisker-67b6bb984-wk547" WorkloadEndpoint="localhost-k8s-whisker--67b6bb984--wk547-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--67b6bb984--wk547-eth0", GenerateName:"whisker-67b6bb984-", Namespace:"calico-system", SelfLink:"", UID:"310afece-2da4-4aef-a7e0-89b082fc02bd", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 27, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67b6bb984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-67b6bb984-wk547", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali30419b53c7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:05.200278 containerd[1594]: 2025-11-06 05:27:05.176 [INFO][4052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" Namespace="calico-system" Pod="whisker-67b6bb984-wk547" WorkloadEndpoint="localhost-k8s-whisker--67b6bb984--wk547-eth0" Nov 6 05:27:05.200355 containerd[1594]: 2025-11-06 05:27:05.176 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30419b53c7d ContainerID="0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" Namespace="calico-system" Pod="whisker-67b6bb984-wk547" WorkloadEndpoint="localhost-k8s-whisker--67b6bb984--wk547-eth0" Nov 6 05:27:05.200355 containerd[1594]: 2025-11-06 05:27:05.184 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" Namespace="calico-system" Pod="whisker-67b6bb984-wk547" WorkloadEndpoint="localhost-k8s-whisker--67b6bb984--wk547-eth0" Nov 6 05:27:05.200397 containerd[1594]: 2025-11-06 05:27:05.185 [INFO][4052] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" Namespace="calico-system" Pod="whisker-67b6bb984-wk547" WorkloadEndpoint="localhost-k8s-whisker--67b6bb984--wk547-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--67b6bb984--wk547-eth0", GenerateName:"whisker-67b6bb984-", Namespace:"calico-system", SelfLink:"", UID:"310afece-2da4-4aef-a7e0-89b082fc02bd", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 27, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67b6bb984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a", Pod:"whisker-67b6bb984-wk547", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali30419b53c7d", MAC:"62:91:ea:ac:ca:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:05.200449 containerd[1594]: 2025-11-06 05:27:05.196 [INFO][4052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" Namespace="calico-system" Pod="whisker-67b6bb984-wk547" WorkloadEndpoint="localhost-k8s-whisker--67b6bb984--wk547-eth0" Nov 6 05:27:05.421314 containerd[1594]: time="2025-11-06T05:27:05.418442085Z" level=info msg="connecting to shim 0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a" address="unix:///run/containerd/s/abc78470324eb7639ca06c5eb8652890eb9c63a55d29de323b80876f6cf75d00" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:27:05.514686 systemd[1]: Started cri-containerd-0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a.scope - libcontainer container 0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a. 
Nov 6 05:27:05.548134 systemd-resolved[1502]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:27:05.653646 containerd[1594]: time="2025-11-06T05:27:05.653601021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67b6bb984-wk547,Uid:310afece-2da4-4aef-a7e0-89b082fc02bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"0af42c47b6a0e96dac0be8d352560049eeb2a4004414194787e3dbd5b3116f9a\"" Nov 6 05:27:05.660278 containerd[1594]: time="2025-11-06T05:27:05.660240548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 05:27:05.801044 containerd[1594]: time="2025-11-06T05:27:05.800980735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd5b89b5c-rs8qz,Uid:0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:27:05.905950 systemd-networkd[1497]: vxlan.calico: Link UP Nov 6 05:27:05.905964 systemd-networkd[1497]: vxlan.calico: Gained carrier Nov 6 05:27:05.949859 systemd-networkd[1497]: cali87974b76522: Link UP Nov 6 05:27:05.950863 systemd-networkd[1497]: cali87974b76522: Gained carrier Nov 6 05:27:05.971269 containerd[1594]: 2025-11-06 05:27:05.853 [INFO][4270] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0 calico-apiserver-5fd5b89b5c- calico-apiserver 0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a 901 0 2025-11-06 05:26:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fd5b89b5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fd5b89b5c-rs8qz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali87974b76522 [] [] }} ContainerID="a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-rs8qz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-" Nov 6 05:27:05.971269 containerd[1594]: 2025-11-06 05:27:05.853 [INFO][4270] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-rs8qz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0" Nov 6 05:27:05.971269 containerd[1594]: 2025-11-06 05:27:05.892 [INFO][4299] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" HandleID="k8s-pod-network.a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" Workload="localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0" Nov 6 05:27:05.971809 containerd[1594]: 2025-11-06 05:27:05.892 [INFO][4299] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" HandleID="k8s-pod-network.a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" Workload="localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001395f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fd5b89b5c-rs8qz", "timestamp":"2025-11-06 05:27:05.892678849 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:27:05.971809 containerd[1594]: 2025-11-06 05:27:05.892 [INFO][4299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:27:05.971809 containerd[1594]: 2025-11-06 05:27:05.892 [INFO][4299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 05:27:05.971809 containerd[1594]: 2025-11-06 05:27:05.892 [INFO][4299] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:27:05.971809 containerd[1594]: 2025-11-06 05:27:05.903 [INFO][4299] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" host="localhost" Nov 6 05:27:05.971809 containerd[1594]: 2025-11-06 05:27:05.910 [INFO][4299] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:27:05.971809 containerd[1594]: 2025-11-06 05:27:05.916 [INFO][4299] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:27:05.971809 containerd[1594]: 2025-11-06 05:27:05.919 [INFO][4299] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:05.971809 containerd[1594]: 2025-11-06 05:27:05.921 [INFO][4299] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:05.971809 containerd[1594]: 2025-11-06 05:27:05.921 [INFO][4299] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" host="localhost" Nov 6 05:27:05.972160 containerd[1594]: 2025-11-06 05:27:05.923 [INFO][4299] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2 Nov 6 05:27:05.972160 containerd[1594]: 2025-11-06 05:27:05.928 [INFO][4299] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" host="localhost" Nov 6 05:27:05.972160 containerd[1594]: 2025-11-06 05:27:05.936 [INFO][4299] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" host="localhost" Nov 6 05:27:05.972160 containerd[1594]: 2025-11-06 05:27:05.936 [INFO][4299] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" host="localhost" Nov 6 05:27:05.972160 containerd[1594]: 2025-11-06 05:27:05.937 [INFO][4299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:27:05.972160 containerd[1594]: 2025-11-06 05:27:05.937 [INFO][4299] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" HandleID="k8s-pod-network.a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" Workload="localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0" Nov 6 05:27:05.972286 containerd[1594]: 2025-11-06 05:27:05.945 [INFO][4270] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-rs8qz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0", GenerateName:"calico-apiserver-5fd5b89b5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fd5b89b5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fd5b89b5c-rs8qz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87974b76522", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:05.972343 containerd[1594]: 2025-11-06 05:27:05.945 [INFO][4270] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-rs8qz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0" Nov 6 05:27:05.972343 containerd[1594]: 2025-11-06 05:27:05.945 [INFO][4270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87974b76522 ContainerID="a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-rs8qz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0" Nov 6 05:27:05.972343 containerd[1594]: 2025-11-06 05:27:05.950 [INFO][4270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-rs8qz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0" Nov 6 05:27:05.972411 containerd[1594]: 2025-11-06 05:27:05.950 [INFO][4270] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-rs8qz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0", GenerateName:"calico-apiserver-5fd5b89b5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fd5b89b5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2", Pod:"calico-apiserver-5fd5b89b5c-rs8qz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87974b76522", MAC:"a2:a7:94:f1:a3:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:05.972466 containerd[1594]: 2025-11-06 05:27:05.964 [INFO][4270] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-rs8qz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--rs8qz-eth0" Nov 6 05:27:06.000080 containerd[1594]: time="2025-11-06T05:27:05.999994944Z" level=info msg="connecting to shim a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2" address="unix:///run/containerd/s/54afc8b82c2fcc6e7c314ee9f6a1da72ed051716525bb20f2b34eb525ffd24b4" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:27:06.036853 containerd[1594]: time="2025-11-06T05:27:06.036804039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:06.037970 containerd[1594]: time="2025-11-06T05:27:06.037938611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 05:27:06.038091 containerd[1594]: time="2025-11-06T05:27:06.038011327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:06.038175 kubelet[2728]: E1106 05:27:06.038091 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:27:06.038528 kubelet[2728]: E1106 
05:27:06.038188 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:27:06.038694 kubelet[2728]: E1106 05:27:06.038650 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6e8622edc8ce4da48c424b544722cdf7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b8rss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b6bb984-wk547_calico-system(310afece-2da4-4aef-a7e0-89b082fc02bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:06.039605 systemd[1]: Started cri-containerd-a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2.scope - libcontainer container a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2. 
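Annotation: note the contrast with the node image: ghcr.io/flatcar/calico/node:v3.30.4 pulled successfully earlier, but the whisker tag (and, just below, whisker-backend and apiserver) come back 404 Not Found from ghcr.io, so containerd reports "failed to resolve image ... not found" and the kubelet surfaces ErrImagePull and keeps logging "Error syncing pod" for the affected pods. A small sketch, illustrative only, for tallying which references fail this way when journal text is piped on stdin (e.g. from journalctl -u containerd):

import collections
import re
import sys

# containerd logs unresolvable references as "... failed to resolve image: <ref>: not found"
pattern = re.compile(r"failed to resolve image: (\S+?): not found")

counts = collections.Counter(
    match.group(1)
    for line in sys.stdin
    for match in pattern.finditer(line)
)
for ref, n in counts.most_common():
    print(f"{n:4d}  {ref}")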
Nov 6 05:27:06.040621 containerd[1594]: time="2025-11-06T05:27:06.040457093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 05:27:06.059975 systemd-resolved[1502]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:27:06.100087 containerd[1594]: time="2025-11-06T05:27:06.100020063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd5b89b5c-rs8qz,Uid:0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a7ec83049a88fe6cd0b0943350d34e5f6ecf5ed869b072e2769aa2bff83f88d2\"" Nov 6 05:27:06.423731 containerd[1594]: time="2025-11-06T05:27:06.423581800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:06.425395 containerd[1594]: time="2025-11-06T05:27:06.425343030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 05:27:06.425571 containerd[1594]: time="2025-11-06T05:27:06.425392853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:06.425669 kubelet[2728]: E1106 05:27:06.425620 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:27:06.425731 kubelet[2728]: E1106 05:27:06.425685 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:27:06.426036 kubelet[2728]: E1106 05:27:06.425972 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8rss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b6bb984-wk547_calico-system(310afece-2da4-4aef-a7e0-89b082fc02bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:06.426165 containerd[1594]: time="2025-11-06T05:27:06.426092739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:27:06.427140 kubelet[2728]: E1106 05:27:06.427100 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67b6bb984-wk547" podUID="310afece-2da4-4aef-a7e0-89b082fc02bd" Nov 6 05:27:06.753986 containerd[1594]: time="2025-11-06T05:27:06.753919640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:06.755225 containerd[1594]: time="2025-11-06T05:27:06.755180850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 
05:27:06.755361 containerd[1594]: time="2025-11-06T05:27:06.755305425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:06.755561 kubelet[2728]: E1106 05:27:06.755497 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:27:06.755633 kubelet[2728]: E1106 05:27:06.755566 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:27:06.755833 kubelet[2728]: E1106 05:27:06.755749 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z94n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fd5b89b5c-rs8qz_calico-apiserver(0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:06.757002 kubelet[2728]: E1106 05:27:06.756949 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-rs8qz" podUID="0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a" Nov 6 05:27:06.794513 containerd[1594]: time="2025-11-06T05:27:06.793991436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z8s46,Uid:01f13972-7d24-4634-a6cb-bffbc3b083bb,Namespace:calico-system,Attempt:0,}" Nov 6 05:27:06.897389 systemd-networkd[1497]: cali33dc3ef5388: Link UP Nov 6 05:27:06.897656 systemd-networkd[1497]: cali33dc3ef5388: Gained carrier Nov 6 05:27:06.924788 containerd[1594]: 2025-11-06 05:27:06.831 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--z8s46-eth0 goldmane-666569f655- calico-system 01f13972-7d24-4634-a6cb-bffbc3b083bb 906 0 2025-11-06 05:26:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-z8s46 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali33dc3ef5388 [] [] }} ContainerID="b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" Namespace="calico-system" Pod="goldmane-666569f655-z8s46" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8s46-" Nov 6 05:27:06.924788 containerd[1594]: 2025-11-06 05:27:06.832 [INFO][4423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" Namespace="calico-system" Pod="goldmane-666569f655-z8s46" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8s46-eth0" Nov 6 05:27:06.924788 containerd[1594]: 2025-11-06 05:27:06.857 [INFO][4437] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" HandleID="k8s-pod-network.b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" Workload="localhost-k8s-goldmane--666569f655--z8s46-eth0" Nov 6 05:27:06.925067 containerd[1594]: 2025-11-06 05:27:06.857 [INFO][4437] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" HandleID="k8s-pod-network.b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" Workload="localhost-k8s-goldmane--666569f655--z8s46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c72e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-z8s46", "timestamp":"2025-11-06 05:27:06.857539443 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:27:06.925067 containerd[1594]: 2025-11-06 05:27:06.857 [INFO][4437] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:27:06.925067 containerd[1594]: 2025-11-06 05:27:06.857 [INFO][4437] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:27:06.925067 containerd[1594]: 2025-11-06 05:27:06.857 [INFO][4437] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:27:06.925067 containerd[1594]: 2025-11-06 05:27:06.865 [INFO][4437] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" host="localhost" Nov 6 05:27:06.925067 containerd[1594]: 2025-11-06 05:27:06.870 [INFO][4437] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:27:06.925067 containerd[1594]: 2025-11-06 05:27:06.874 [INFO][4437] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:27:06.925067 containerd[1594]: 2025-11-06 05:27:06.876 [INFO][4437] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:06.925067 containerd[1594]: 2025-11-06 05:27:06.880 [INFO][4437] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:06.925067 containerd[1594]: 2025-11-06 05:27:06.880 [INFO][4437] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" host="localhost" Nov 6 05:27:06.925459 containerd[1594]: 2025-11-06 05:27:06.881 [INFO][4437] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960 Nov 6 05:27:06.925459 containerd[1594]: 2025-11-06 05:27:06.885 [INFO][4437] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" host="localhost" Nov 6 05:27:06.925459 containerd[1594]: 2025-11-06 05:27:06.891 [INFO][4437] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" host="localhost" Nov 6 05:27:06.925459 containerd[1594]: 2025-11-06 05:27:06.891 [INFO][4437] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" host="localhost" Nov 6 05:27:06.925459 containerd[1594]: 2025-11-06 05:27:06.891 [INFO][4437] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:27:06.925459 containerd[1594]: 2025-11-06 05:27:06.891 [INFO][4437] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" HandleID="k8s-pod-network.b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" Workload="localhost-k8s-goldmane--666569f655--z8s46-eth0" Nov 6 05:27:06.925683 containerd[1594]: 2025-11-06 05:27:06.894 [INFO][4423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" Namespace="calico-system" Pod="goldmane-666569f655-z8s46" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8s46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--z8s46-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"01f13972-7d24-4634-a6cb-bffbc3b083bb", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-z8s46", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33dc3ef5388", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:06.925683 containerd[1594]: 2025-11-06 05:27:06.895 [INFO][4423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" Namespace="calico-system" Pod="goldmane-666569f655-z8s46" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8s46-eth0" Nov 6 05:27:06.925843 containerd[1594]: 2025-11-06 05:27:06.895 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33dc3ef5388 ContainerID="b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" Namespace="calico-system" Pod="goldmane-666569f655-z8s46" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8s46-eth0" Nov 6 05:27:06.925843 containerd[1594]: 2025-11-06 05:27:06.897 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" Namespace="calico-system" Pod="goldmane-666569f655-z8s46" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8s46-eth0" Nov 6 05:27:06.925917 containerd[1594]: 2025-11-06 05:27:06.899 [INFO][4423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" Namespace="calico-system" Pod="goldmane-666569f655-z8s46" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8s46-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--z8s46-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"01f13972-7d24-4634-a6cb-bffbc3b083bb", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960", Pod:"goldmane-666569f655-z8s46", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33dc3ef5388", MAC:"fa:42:e2:9f:82:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:06.925971 containerd[1594]: 2025-11-06 05:27:06.919 [INFO][4423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" Namespace="calico-system" Pod="goldmane-666569f655-z8s46" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z8s46-eth0" Nov 6 05:27:06.931171 kubelet[2728]: E1106 05:27:06.930722 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-rs8qz" podUID="0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a" Nov 6 05:27:06.933086 kubelet[2728]: E1106 05:27:06.933024 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67b6bb984-wk547" podUID="310afece-2da4-4aef-a7e0-89b082fc02bd" Nov 6 05:27:06.963375 containerd[1594]: time="2025-11-06T05:27:06.963200633Z" level=info msg="connecting to shim b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960" 
address="unix:///run/containerd/s/11abd32b2aee6243b4a631482d45e4532e080698cb81cc2926331b571fc19dc9" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:27:06.996747 systemd[1]: Started cri-containerd-b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960.scope - libcontainer container b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960. Nov 6 05:27:07.013006 systemd-resolved[1502]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:27:07.048416 containerd[1594]: time="2025-11-06T05:27:07.048364166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z8s46,Uid:01f13972-7d24-4634-a6cb-bffbc3b083bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1e3fce326a3a24cfb4d3ae3bced1021c562009a8adfd6f6e09a05157679d960\"" Nov 6 05:27:07.050220 containerd[1594]: time="2025-11-06T05:27:07.050178013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 05:27:07.090703 systemd-networkd[1497]: cali30419b53c7d: Gained IPv6LL Nov 6 05:27:07.404827 containerd[1594]: time="2025-11-06T05:27:07.404680037Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:07.406461 containerd[1594]: time="2025-11-06T05:27:07.406395199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 05:27:07.406616 containerd[1594]: time="2025-11-06T05:27:07.406526926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:07.406790 kubelet[2728]: E1106 05:27:07.406722 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:27:07.407104 kubelet[2728]: E1106 05:27:07.406794 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:27:07.407104 kubelet[2728]: E1106 05:27:07.406955 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpnkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z8s46_calico-system(01f13972-7d24-4634-a6cb-bffbc3b083bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:07.408185 kubelet[2728]: E1106 05:27:07.408143 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8s46" podUID="01f13972-7d24-4634-a6cb-bffbc3b083bb" Nov 6 05:27:07.922803 systemd-networkd[1497]: vxlan.calico: Gained IPv6LL Nov 6 05:27:07.923172 systemd-networkd[1497]: cali87974b76522: Gained IPv6LL Nov 
6 05:27:07.929705 kubelet[2728]: E1106 05:27:07.929664 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8s46" podUID="01f13972-7d24-4634-a6cb-bffbc3b083bb" Nov 6 05:27:07.929991 kubelet[2728]: E1106 05:27:07.929789 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-rs8qz" podUID="0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a" Nov 6 05:27:08.562681 systemd-networkd[1497]: cali33dc3ef5388: Gained IPv6LL Nov 6 05:27:08.797042 containerd[1594]: time="2025-11-06T05:27:08.796993327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd5b89b5c-qh578,Uid:fbee8d7b-086c-45da-82fb-11a1baff350a,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:27:08.932678 kubelet[2728]: E1106 05:27:08.932526 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8s46" podUID="01f13972-7d24-4634-a6cb-bffbc3b083bb" Nov 6 05:27:08.942632 systemd-networkd[1497]: calia5fd46855fb: Link UP Nov 6 05:27:08.943323 systemd-networkd[1497]: calia5fd46855fb: Gained carrier Nov 6 05:27:08.955322 containerd[1594]: 2025-11-06 05:27:08.832 [INFO][4510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0 calico-apiserver-5fd5b89b5c- calico-apiserver fbee8d7b-086c-45da-82fb-11a1baff350a 907 0 2025-11-06 05:26:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fd5b89b5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fd5b89b5c-qh578 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia5fd46855fb [] [] }} ContainerID="fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-qh578" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-" Nov 6 05:27:08.955322 containerd[1594]: 2025-11-06 05:27:08.832 [INFO][4510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-qh578" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0" Nov 6 
05:27:08.955322 containerd[1594]: 2025-11-06 05:27:08.870 [INFO][4524] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" HandleID="k8s-pod-network.fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" Workload="localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0" Nov 6 05:27:08.955593 containerd[1594]: 2025-11-06 05:27:08.870 [INFO][4524] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" HandleID="k8s-pod-network.fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" Workload="localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00049be70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fd5b89b5c-qh578", "timestamp":"2025-11-06 05:27:08.87027936 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:27:08.955593 containerd[1594]: 2025-11-06 05:27:08.870 [INFO][4524] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:27:08.955593 containerd[1594]: 2025-11-06 05:27:08.870 [INFO][4524] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 05:27:08.955593 containerd[1594]: 2025-11-06 05:27:08.870 [INFO][4524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:27:08.955593 containerd[1594]: 2025-11-06 05:27:08.892 [INFO][4524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" host="localhost" Nov 6 05:27:08.955593 containerd[1594]: 2025-11-06 05:27:08.916 [INFO][4524] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:27:08.955593 containerd[1594]: 2025-11-06 05:27:08.920 [INFO][4524] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:27:08.955593 containerd[1594]: 2025-11-06 05:27:08.921 [INFO][4524] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:08.955593 containerd[1594]: 2025-11-06 05:27:08.924 [INFO][4524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:08.955593 containerd[1594]: 2025-11-06 05:27:08.924 [INFO][4524] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" host="localhost" Nov 6 05:27:08.955837 containerd[1594]: 2025-11-06 05:27:08.925 [INFO][4524] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378 Nov 6 05:27:08.955837 containerd[1594]: 2025-11-06 05:27:08.929 [INFO][4524] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" host="localhost" Nov 6 05:27:08.955837 containerd[1594]: 2025-11-06 05:27:08.935 [INFO][4524] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" host="localhost" Nov 6 05:27:08.955837 
containerd[1594]: 2025-11-06 05:27:08.935 [INFO][4524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" host="localhost" Nov 6 05:27:08.955837 containerd[1594]: 2025-11-06 05:27:08.935 [INFO][4524] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 05:27:08.955837 containerd[1594]: 2025-11-06 05:27:08.935 [INFO][4524] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" HandleID="k8s-pod-network.fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" Workload="localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0" Nov 6 05:27:08.955977 containerd[1594]: 2025-11-06 05:27:08.939 [INFO][4510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-qh578" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0", GenerateName:"calico-apiserver-5fd5b89b5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fbee8d7b-086c-45da-82fb-11a1baff350a", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fd5b89b5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fd5b89b5c-qh578", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia5fd46855fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:08.956049 containerd[1594]: 2025-11-06 05:27:08.939 [INFO][4510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-qh578" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0" Nov 6 05:27:08.956049 containerd[1594]: 2025-11-06 05:27:08.939 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5fd46855fb ContainerID="fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-qh578" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0" Nov 6 05:27:08.956049 containerd[1594]: 2025-11-06 05:27:08.942 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-qh578" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0" Nov 6 05:27:08.956117 containerd[1594]: 2025-11-06 05:27:08.942 [INFO][4510] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-qh578" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0", GenerateName:"calico-apiserver-5fd5b89b5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fbee8d7b-086c-45da-82fb-11a1baff350a", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fd5b89b5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378", Pod:"calico-apiserver-5fd5b89b5c-qh578", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia5fd46855fb", MAC:"46:32:ed:11:2a:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:08.956170 containerd[1594]: 2025-11-06 05:27:08.952 [INFO][4510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" Namespace="calico-apiserver" Pod="calico-apiserver-5fd5b89b5c-qh578" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fd5b89b5c--qh578-eth0" Nov 6 05:27:09.382008 containerd[1594]: time="2025-11-06T05:27:09.381954902Z" level=info msg="connecting to shim fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378" address="unix:///run/containerd/s/1ee4ef2b2bfd0859c65b31b4cdd53df081bd105b3d7859aab72b514221f3b2c3" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:27:09.412711 systemd[1]: Started cri-containerd-fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378.scope - libcontainer container fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378. 
Nov 6 05:27:09.428756 systemd-resolved[1502]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:27:09.460491 containerd[1594]: time="2025-11-06T05:27:09.460426992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fd5b89b5c-qh578,Uid:fbee8d7b-086c-45da-82fb-11a1baff350a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fea1fcbc385f60c3e347c58227a09064ab2987f173a1668a64b0d453b1052378\"" Nov 6 05:27:09.462272 containerd[1594]: time="2025-11-06T05:27:09.462237573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:27:09.794123 kubelet[2728]: E1106 05:27:09.794008 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:09.794302 kubelet[2728]: E1106 05:27:09.794206 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:09.795136 containerd[1594]: time="2025-11-06T05:27:09.795012418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b64844bb-tt8w9,Uid:08a93b65-3d61-4796-a494-3e40e8cba374,Namespace:calico-system,Attempt:0,}" Nov 6 05:27:09.795639 containerd[1594]: time="2025-11-06T05:27:09.795115802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m5td6,Uid:600f2205-e1d2-4e4b-bac1-c6ad7172351f,Namespace:kube-system,Attempt:0,}" Nov 6 05:27:09.795812 containerd[1594]: time="2025-11-06T05:27:09.795629637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nwc4r,Uid:a9bd2856-b08a-4413-8d50-2bcdbab25838,Namespace:kube-system,Attempt:0,}" Nov 6 05:27:09.797086 containerd[1594]: time="2025-11-06T05:27:09.795820094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gjsxf,Uid:a5b9a4e3-c326-4625-87d3-4243511ed604,Namespace:calico-system,Attempt:0,}" Nov 6 05:27:09.808558 containerd[1594]: time="2025-11-06T05:27:09.808497006Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:09.821350 containerd[1594]: time="2025-11-06T05:27:09.821166805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:27:09.821350 containerd[1594]: time="2025-11-06T05:27:09.821209766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:09.821738 kubelet[2728]: E1106 05:27:09.821681 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:27:09.821807 kubelet[2728]: E1106 05:27:09.821778 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:27:09.822024 kubelet[2728]: E1106 05:27:09.821957 2728 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65m2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fd5b89b5c-qh578_calico-apiserver(fbee8d7b-086c-45da-82fb-11a1baff350a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:09.824001 kubelet[2728]: E1106 05:27:09.823629 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-qh578" podUID="fbee8d7b-086c-45da-82fb-11a1baff350a" Nov 6 05:27:09.942454 kubelet[2728]: E1106 05:27:09.942398 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-qh578" podUID="fbee8d7b-086c-45da-82fb-11a1baff350a" Nov 6 05:27:10.034697 systemd-networkd[1497]: calia5fd46855fb: 
Gained IPv6LL Nov 6 05:27:10.064794 systemd-networkd[1497]: cali8ce5d0e4f2c: Link UP Nov 6 05:27:10.065587 systemd-networkd[1497]: cali8ce5d0e4f2c: Gained carrier Nov 6 05:27:10.079037 containerd[1594]: 2025-11-06 05:27:09.910 [INFO][4601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--m5td6-eth0 coredns-674b8bbfcf- kube-system 600f2205-e1d2-4e4b-bac1-c6ad7172351f 897 0 2025-11-06 05:26:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-m5td6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8ce5d0e4f2c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5td6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5td6-" Nov 6 05:27:10.079037 containerd[1594]: 2025-11-06 05:27:09.910 [INFO][4601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5td6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5td6-eth0" Nov 6 05:27:10.079037 containerd[1594]: 2025-11-06 05:27:09.970 [INFO][4647] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" HandleID="k8s-pod-network.69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" Workload="localhost-k8s-coredns--674b8bbfcf--m5td6-eth0" Nov 6 05:27:10.079301 containerd[1594]: 2025-11-06 05:27:09.976 [INFO][4647] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" HandleID="k8s-pod-network.69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" Workload="localhost-k8s-coredns--674b8bbfcf--m5td6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a35d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-m5td6", "timestamp":"2025-11-06 05:27:09.970451156 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:27:10.079301 containerd[1594]: 2025-11-06 05:27:09.977 [INFO][4647] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:27:10.079301 containerd[1594]: 2025-11-06 05:27:09.977 [INFO][4647] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:27:10.079301 containerd[1594]: 2025-11-06 05:27:09.977 [INFO][4647] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:27:10.079301 containerd[1594]: 2025-11-06 05:27:09.992 [INFO][4647] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" host="localhost" Nov 6 05:27:10.079301 containerd[1594]: 2025-11-06 05:27:10.004 [INFO][4647] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:27:10.079301 containerd[1594]: 2025-11-06 05:27:10.017 [INFO][4647] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:27:10.079301 containerd[1594]: 2025-11-06 05:27:10.019 [INFO][4647] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:10.079301 containerd[1594]: 2025-11-06 05:27:10.022 [INFO][4647] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:10.079301 containerd[1594]: 2025-11-06 05:27:10.022 [INFO][4647] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" host="localhost" Nov 6 05:27:10.079558 containerd[1594]: 2025-11-06 05:27:10.024 [INFO][4647] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d Nov 6 05:27:10.079558 containerd[1594]: 2025-11-06 05:27:10.050 [INFO][4647] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" host="localhost" Nov 6 05:27:10.079558 containerd[1594]: 2025-11-06 05:27:10.057 [INFO][4647] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" host="localhost" Nov 6 05:27:10.079558 containerd[1594]: 2025-11-06 05:27:10.057 [INFO][4647] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" host="localhost" Nov 6 05:27:10.079558 containerd[1594]: 2025-11-06 05:27:10.058 [INFO][4647] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:27:10.079558 containerd[1594]: 2025-11-06 05:27:10.058 [INFO][4647] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" HandleID="k8s-pod-network.69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" Workload="localhost-k8s-coredns--674b8bbfcf--m5td6-eth0" Nov 6 05:27:10.079723 containerd[1594]: 2025-11-06 05:27:10.062 [INFO][4601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5td6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5td6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--m5td6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"600f2205-e1d2-4e4b-bac1-c6ad7172351f", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-m5td6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ce5d0e4f2c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:10.079785 containerd[1594]: 2025-11-06 05:27:10.062 [INFO][4601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5td6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5td6-eth0" Nov 6 05:27:10.079785 containerd[1594]: 2025-11-06 05:27:10.062 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ce5d0e4f2c ContainerID="69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5td6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5td6-eth0" Nov 6 05:27:10.079785 containerd[1594]: 2025-11-06 05:27:10.066 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5td6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5td6-eth0" Nov 6 05:27:10.079853 
containerd[1594]: 2025-11-06 05:27:10.066 [INFO][4601] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5td6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5td6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--m5td6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"600f2205-e1d2-4e4b-bac1-c6ad7172351f", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d", Pod:"coredns-674b8bbfcf-m5td6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ce5d0e4f2c", MAC:"f2:b0:18:cf:b6:94", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:10.079853 containerd[1594]: 2025-11-06 05:27:10.076 [INFO][4601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m5td6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m5td6-eth0" Nov 6 05:27:10.110660 containerd[1594]: time="2025-11-06T05:27:10.110599426Z" level=info msg="connecting to shim 69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d" address="unix:///run/containerd/s/06a518cbe20f4e1774e99e0a8f449520af3a059b44b5f756799838b800734d61" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:27:10.121863 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:57758.service - OpenSSH per-connection server daemon (10.0.0.1:57758). Nov 6 05:27:10.142913 systemd-networkd[1497]: calif5bc5b76366: Link UP Nov 6 05:27:10.143247 systemd-networkd[1497]: calif5bc5b76366: Gained carrier Nov 6 05:27:10.166696 systemd[1]: Started cri-containerd-69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d.scope - libcontainer container 69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d. 
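The CoreDNS WorkloadEndpoint structs above print their ports as Go hex literals (Port:0x35 for dns and dns-tcp, Port:0x23c1 for metrics). A two-line sketch just to decode them:

    // hexports.go: decode the hex port values printed in the WorkloadEndpoint
    // dumps above.
    package main

    import "fmt"

    func main() {
    	fmt.Println(0x35, 0x23c1) // 53 9153: the dns/dns-tcp and metrics ports
    }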
Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:09.953 [INFO][4631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gjsxf-eth0 csi-node-driver- calico-system a5b9a4e3-c326-4625-87d3-4243511ed604 769 0 2025-11-06 05:26:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gjsxf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif5bc5b76366 [] [] }} ContainerID="d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" Namespace="calico-system" Pod="csi-node-driver-gjsxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--gjsxf-" Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:09.953 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" Namespace="calico-system" Pod="csi-node-driver-gjsxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--gjsxf-eth0" Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.026 [INFO][4661] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" HandleID="k8s-pod-network.d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" Workload="localhost-k8s-csi--node--driver--gjsxf-eth0" Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.027 [INFO][4661] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" HandleID="k8s-pod-network.d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" Workload="localhost-k8s-csi--node--driver--gjsxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000506970), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gjsxf", "timestamp":"2025-11-06 05:27:10.026730384 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.027 [INFO][4661] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.057 [INFO][4661] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.057 [INFO][4661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.092 [INFO][4661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" host="localhost" Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.111 [INFO][4661] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.115 [INFO][4661] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.116 [INFO][4661] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.119 [INFO][4661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.119 [INFO][4661] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" host="localhost" Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.121 [INFO][4661] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59 Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.125 [INFO][4661] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" host="localhost" Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.131 [INFO][4661] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" host="localhost" Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.131 [INFO][4661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" host="localhost" Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.131 [INFO][4661] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:27:10.176552 containerd[1594]: 2025-11-06 05:27:10.131 [INFO][4661] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" HandleID="k8s-pod-network.d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" Workload="localhost-k8s-csi--node--driver--gjsxf-eth0" Nov 6 05:27:10.177162 containerd[1594]: 2025-11-06 05:27:10.136 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" Namespace="calico-system" Pod="csi-node-driver-gjsxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--gjsxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gjsxf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5b9a4e3-c326-4625-87d3-4243511ed604", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gjsxf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif5bc5b76366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:10.177162 containerd[1594]: 2025-11-06 05:27:10.136 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" Namespace="calico-system" Pod="csi-node-driver-gjsxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--gjsxf-eth0" Nov 6 05:27:10.177162 containerd[1594]: 2025-11-06 05:27:10.136 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5bc5b76366 ContainerID="d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" Namespace="calico-system" Pod="csi-node-driver-gjsxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--gjsxf-eth0" Nov 6 05:27:10.177162 containerd[1594]: 2025-11-06 05:27:10.151 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" Namespace="calico-system" Pod="csi-node-driver-gjsxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--gjsxf-eth0" Nov 6 05:27:10.177162 containerd[1594]: 2025-11-06 05:27:10.154 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" Namespace="calico-system" Pod="csi-node-driver-gjsxf" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gjsxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gjsxf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5b9a4e3-c326-4625-87d3-4243511ed604", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59", Pod:"csi-node-driver-gjsxf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif5bc5b76366", MAC:"02:8b:1b:8b:f0:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:10.177162 containerd[1594]: 2025-11-06 05:27:10.169 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" Namespace="calico-system" Pod="csi-node-driver-gjsxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--gjsxf-eth0" Nov 6 05:27:10.195550 systemd-resolved[1502]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:27:10.204490 sshd[4712]: Accepted publickey for core from 10.0.0.1 port 57758 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:10.208588 containerd[1594]: time="2025-11-06T05:27:10.208496695Z" level=info msg="connecting to shim d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59" address="unix:///run/containerd/s/db44fd642a41c12b04a9298020c62b69f18cb5d94731c5bd39006b32051ed0cc" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:27:10.209111 sshd-session[4712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:10.217300 systemd-logind[1574]: New session 9 of user core. Nov 6 05:27:10.222772 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 05:27:10.244845 systemd[1]: Started cri-containerd-d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59.scope - libcontainer container d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59. 
Nov 6 05:27:10.249217 containerd[1594]: time="2025-11-06T05:27:10.249177220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m5td6,Uid:600f2205-e1d2-4e4b-bac1-c6ad7172351f,Namespace:kube-system,Attempt:0,} returns sandbox id \"69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d\"" Nov 6 05:27:10.254256 kubelet[2728]: E1106 05:27:10.253559 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:10.262554 containerd[1594]: time="2025-11-06T05:27:10.262434781Z" level=info msg="CreateContainer within sandbox \"69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 05:27:10.268741 systemd-resolved[1502]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:27:10.273405 containerd[1594]: time="2025-11-06T05:27:10.272870992Z" level=info msg="Container 8ad1bc21993068457097fe846e4ca18c5f609eb8a73f97bebce675cf13717a11: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:27:10.281045 systemd-networkd[1497]: cali5233d816f5e: Link UP Nov 6 05:27:10.289883 systemd-networkd[1497]: cali5233d816f5e: Gained carrier Nov 6 05:27:10.290949 containerd[1594]: time="2025-11-06T05:27:10.290916685Z" level=info msg="CreateContainer within sandbox \"69d714b8811d1d1a141e10f9a9c5faa7b687640c4b39d4dcfb26b60a025b151d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ad1bc21993068457097fe846e4ca18c5f609eb8a73f97bebce675cf13717a11\"" Nov 6 05:27:10.294015 containerd[1594]: time="2025-11-06T05:27:10.293970070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gjsxf,Uid:a5b9a4e3-c326-4625-87d3-4243511ed604,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6460db3391a28971429695c15a8eb2e187d34af291923d0a4bd09b935866f59\"" Nov 6 05:27:10.295053 containerd[1594]: time="2025-11-06T05:27:10.295018058Z" level=info msg="StartContainer for \"8ad1bc21993068457097fe846e4ca18c5f609eb8a73f97bebce675cf13717a11\"" Nov 6 05:27:10.295993 containerd[1594]: time="2025-11-06T05:27:10.295770351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 05:27:10.301842 containerd[1594]: time="2025-11-06T05:27:10.301799364Z" level=info msg="connecting to shim 8ad1bc21993068457097fe846e4ca18c5f609eb8a73f97bebce675cf13717a11" address="unix:///run/containerd/s/06a518cbe20f4e1774e99e0a8f449520af3a059b44b5f756799838b800734d61" protocol=ttrpc version=3 Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:09.915 [INFO][4588] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0 calico-kube-controllers-54b64844bb- calico-system 08a93b65-3d61-4796-a494-3e40e8cba374 902 0 2025-11-06 05:26:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54b64844bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54b64844bb-tt8w9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5233d816f5e [] [] }} ContainerID="9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" Namespace="calico-system" 
Pod="calico-kube-controllers-54b64844bb-tt8w9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-" Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:09.916 [INFO][4588] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" Namespace="calico-system" Pod="calico-kube-controllers-54b64844bb-tt8w9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0" Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.027 [INFO][4655] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" HandleID="k8s-pod-network.9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" Workload="localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0" Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.028 [INFO][4655] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" HandleID="k8s-pod-network.9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" Workload="localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54b64844bb-tt8w9", "timestamp":"2025-11-06 05:27:10.027909149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.028 [INFO][4655] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.131 [INFO][4655] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.131 [INFO][4655] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.194 [INFO][4655] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" host="localhost" Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.212 [INFO][4655] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.223 [INFO][4655] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.227 [INFO][4655] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.235 [INFO][4655] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.235 [INFO][4655] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" host="localhost" Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.239 [INFO][4655] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.250 [INFO][4655] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" host="localhost" Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.263 [INFO][4655] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" host="localhost" Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.263 [INFO][4655] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" host="localhost" Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.263 [INFO][4655] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:27:10.317040 containerd[1594]: 2025-11-06 05:27:10.263 [INFO][4655] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" HandleID="k8s-pod-network.9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" Workload="localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0" Nov 6 05:27:10.317656 containerd[1594]: 2025-11-06 05:27:10.268 [INFO][4588] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" Namespace="calico-system" Pod="calico-kube-controllers-54b64844bb-tt8w9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0", GenerateName:"calico-kube-controllers-54b64844bb-", Namespace:"calico-system", SelfLink:"", UID:"08a93b65-3d61-4796-a494-3e40e8cba374", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b64844bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54b64844bb-tt8w9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5233d816f5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:10.317656 containerd[1594]: 2025-11-06 05:27:10.269 [INFO][4588] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" Namespace="calico-system" Pod="calico-kube-controllers-54b64844bb-tt8w9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0" Nov 6 05:27:10.317656 containerd[1594]: 2025-11-06 05:27:10.269 [INFO][4588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5233d816f5e ContainerID="9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" Namespace="calico-system" Pod="calico-kube-controllers-54b64844bb-tt8w9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0" Nov 6 05:27:10.317656 containerd[1594]: 2025-11-06 05:27:10.293 [INFO][4588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" Namespace="calico-system" Pod="calico-kube-controllers-54b64844bb-tt8w9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0" Nov 6 05:27:10.317656 containerd[1594]: 2025-11-06 05:27:10.297 [INFO][4588] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" Namespace="calico-system" Pod="calico-kube-controllers-54b64844bb-tt8w9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0", GenerateName:"calico-kube-controllers-54b64844bb-", Namespace:"calico-system", SelfLink:"", UID:"08a93b65-3d61-4796-a494-3e40e8cba374", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b64844bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d", Pod:"calico-kube-controllers-54b64844bb-tt8w9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5233d816f5e", MAC:"e2:af:11:de:22:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:10.317656 containerd[1594]: 2025-11-06 05:27:10.309 [INFO][4588] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" Namespace="calico-system" Pod="calico-kube-controllers-54b64844bb-tt8w9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b64844bb--tt8w9-eth0" Nov 6 05:27:10.333654 systemd[1]: Started cri-containerd-8ad1bc21993068457097fe846e4ca18c5f609eb8a73f97bebce675cf13717a11.scope - libcontainer container 8ad1bc21993068457097fe846e4ca18c5f609eb8a73f97bebce675cf13717a11. 
Nov 6 05:27:10.363849 systemd-networkd[1497]: cali9d66a061b25: Link UP Nov 6 05:27:10.364971 containerd[1594]: time="2025-11-06T05:27:10.364694713Z" level=info msg="connecting to shim 9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d" address="unix:///run/containerd/s/3fc490871b8996705a424d911b9afaed5bba4be737605022bb0955824ec23cdf" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:27:10.365125 systemd-networkd[1497]: cali9d66a061b25: Gained carrier Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:09.946 [INFO][4620] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0 coredns-674b8bbfcf- kube-system a9bd2856-b08a-4413-8d50-2bcdbab25838 904 0 2025-11-06 05:26:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-nwc4r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d66a061b25 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" Namespace="kube-system" Pod="coredns-674b8bbfcf-nwc4r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nwc4r-" Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:09.950 [INFO][4620] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" Namespace="kube-system" Pod="coredns-674b8bbfcf-nwc4r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0" Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.038 [INFO][4669] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" HandleID="k8s-pod-network.9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" Workload="localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0" Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.038 [INFO][4669] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" HandleID="k8s-pod-network.9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" Workload="localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1d80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-nwc4r", "timestamp":"2025-11-06 05:27:10.038447572 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.038 [INFO][4669] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.264 [INFO][4669] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.264 [INFO][4669] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.293 [INFO][4669] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" host="localhost" Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.313 [INFO][4669] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.321 [INFO][4669] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.324 [INFO][4669] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.329 [INFO][4669] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.329 [INFO][4669] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" host="localhost" Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.330 [INFO][4669] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394 Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.337 [INFO][4669] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" host="localhost" Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.349 [INFO][4669] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" host="localhost" Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.349 [INFO][4669] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" host="localhost" Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.349 [INFO][4669] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:27:10.394228 containerd[1594]: 2025-11-06 05:27:10.349 [INFO][4669] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" HandleID="k8s-pod-network.9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" Workload="localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0" Nov 6 05:27:10.394852 containerd[1594]: 2025-11-06 05:27:10.355 [INFO][4620] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" Namespace="kube-system" Pod="coredns-674b8bbfcf-nwc4r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a9bd2856-b08a-4413-8d50-2bcdbab25838", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-nwc4r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d66a061b25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:10.394852 containerd[1594]: 2025-11-06 05:27:10.355 [INFO][4620] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" Namespace="kube-system" Pod="coredns-674b8bbfcf-nwc4r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0" Nov 6 05:27:10.394852 containerd[1594]: 2025-11-06 05:27:10.355 [INFO][4620] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d66a061b25 ContainerID="9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" Namespace="kube-system" Pod="coredns-674b8bbfcf-nwc4r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0" Nov 6 05:27:10.394852 containerd[1594]: 2025-11-06 05:27:10.365 [INFO][4620] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" Namespace="kube-system" Pod="coredns-674b8bbfcf-nwc4r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0" Nov 6 05:27:10.394852 
containerd[1594]: 2025-11-06 05:27:10.366 [INFO][4620] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" Namespace="kube-system" Pod="coredns-674b8bbfcf-nwc4r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a9bd2856-b08a-4413-8d50-2bcdbab25838", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394", Pod:"coredns-674b8bbfcf-nwc4r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d66a061b25", MAC:"aa:de:6a:07:a8:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:27:10.394852 containerd[1594]: 2025-11-06 05:27:10.383 [INFO][4620] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" Namespace="kube-system" Pod="coredns-674b8bbfcf-nwc4r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nwc4r-eth0" Nov 6 05:27:10.396807 sshd[4775]: Connection closed by 10.0.0.1 port 57758 Nov 6 05:27:10.397350 sshd-session[4712]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:10.405941 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:57758.service: Deactivated successfully. Nov 6 05:27:10.406904 containerd[1594]: time="2025-11-06T05:27:10.406874054Z" level=info msg="StartContainer for \"8ad1bc21993068457097fe846e4ca18c5f609eb8a73f97bebce675cf13717a11\" returns successfully" Nov 6 05:27:10.411188 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 05:27:10.415118 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. Nov 6 05:27:10.427664 systemd[1]: Started cri-containerd-9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d.scope - libcontainer container 9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d. 
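In the coredns-674b8bbfcf-nwc4r endpoint dumps above, the WorkloadEndpointPort values are printed in hex (Port:0x35, Port:0x23c1). Decoded, they are the usual CoreDNS ports; a tiny sketch of the conversion (names and values copied from the log):

```go
package main

import "fmt"

func main() {
	// Port values as printed in the v3.WorkloadEndpointPort structs above.
	ports := map[string]uint16{
		"dns":     0x35,   // 53, UDP
		"dns-tcp": 0x35,   // 53, TCP
		"metrics": 0x23c1, // 9153, TCP
	}
	for name, p := range ports {
		fmt.Printf("%s -> %d\n", name, p)
	}
}
```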
Nov 6 05:27:10.428172 containerd[1594]: time="2025-11-06T05:27:10.428127222Z" level=info msg="connecting to shim 9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394" address="unix:///run/containerd/s/269e92a25fe654eaf8844681d0f1eb28f83de32345a2d69f9cde0abc750566b8" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:27:10.428972 systemd-logind[1574]: Removed session 9. Nov 6 05:27:10.445198 systemd-resolved[1502]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:27:10.464609 systemd[1]: Started cri-containerd-9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394.scope - libcontainer container 9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394. Nov 6 05:27:10.484626 systemd-resolved[1502]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 05:27:10.488349 containerd[1594]: time="2025-11-06T05:27:10.488199438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b64844bb-tt8w9,Uid:08a93b65-3d61-4796-a494-3e40e8cba374,Namespace:calico-system,Attempt:0,} returns sandbox id \"9df8d634ecd57d06d21287bc4e769bc2e124959595a49103b229a78a1ea57f6d\"" Nov 6 05:27:10.532790 containerd[1594]: time="2025-11-06T05:27:10.532734864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nwc4r,Uid:a9bd2856-b08a-4413-8d50-2bcdbab25838,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394\"" Nov 6 05:27:10.533538 kubelet[2728]: E1106 05:27:10.533510 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:10.555613 containerd[1594]: time="2025-11-06T05:27:10.555556227Z" level=info msg="CreateContainer within sandbox \"9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 05:27:10.577562 containerd[1594]: time="2025-11-06T05:27:10.576709256Z" level=info msg="Container 1614576e67ee7f516e43a1f3cd8587394b456a83235d417debad28d64114a783: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:27:10.587003 containerd[1594]: time="2025-11-06T05:27:10.586961542Z" level=info msg="CreateContainer within sandbox \"9bd81c728b555121151d8337d7ba13ee38d0d36e19897dd411765f9e2c627394\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1614576e67ee7f516e43a1f3cd8587394b456a83235d417debad28d64114a783\"" Nov 6 05:27:10.587582 containerd[1594]: time="2025-11-06T05:27:10.587557941Z" level=info msg="StartContainer for \"1614576e67ee7f516e43a1f3cd8587394b456a83235d417debad28d64114a783\"" Nov 6 05:27:10.588766 containerd[1594]: time="2025-11-06T05:27:10.588738499Z" level=info msg="connecting to shim 1614576e67ee7f516e43a1f3cd8587394b456a83235d417debad28d64114a783" address="unix:///run/containerd/s/269e92a25fe654eaf8844681d0f1eb28f83de32345a2d69f9cde0abc750566b8" protocol=ttrpc version=3 Nov 6 05:27:10.615786 systemd[1]: Started cri-containerd-1614576e67ee7f516e43a1f3cd8587394b456a83235d417debad28d64114a783.scope - libcontainer container 1614576e67ee7f516e43a1f3cd8587394b456a83235d417debad28d64114a783. 
Nov 6 05:27:10.652679 containerd[1594]: time="2025-11-06T05:27:10.652620381Z" level=info msg="StartContainer for \"1614576e67ee7f516e43a1f3cd8587394b456a83235d417debad28d64114a783\" returns successfully" Nov 6 05:27:10.654875 containerd[1594]: time="2025-11-06T05:27:10.654839399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:10.656134 containerd[1594]: time="2025-11-06T05:27:10.656104574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 05:27:10.656213 containerd[1594]: time="2025-11-06T05:27:10.656198712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:10.656335 kubelet[2728]: E1106 05:27:10.656299 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:27:10.656423 kubelet[2728]: E1106 05:27:10.656337 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:27:10.658717 kubelet[2728]: E1106 05:27:10.658655 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2928,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-gjsxf_calico-system(a5b9a4e3-c326-4625-87d3-4243511ed604): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:10.658974 containerd[1594]: time="2025-11-06T05:27:10.658886920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 05:27:10.943846 kubelet[2728]: E1106 05:27:10.943807 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:10.949225 kubelet[2728]: E1106 05:27:10.949179 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:10.951733 kubelet[2728]: E1106 05:27:10.951670 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-qh578" podUID="fbee8d7b-086c-45da-82fb-11a1baff350a" Nov 6 05:27:10.955781 kubelet[2728]: I1106 05:27:10.955723 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nwc4r" podStartSLOduration=40.955705601 podStartE2EDuration="40.955705601s" podCreationTimestamp="2025-11-06 05:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:27:10.954879999 +0000 UTC m=+46.271550924" watchObservedRunningTime="2025-11-06 05:27:10.955705601 +0000 UTC m=+46.272376536" Nov 6 05:27:10.969109 containerd[1594]: time="2025-11-06T05:27:10.969033332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:10.969612 kubelet[2728]: I1106 05:27:10.969422 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-m5td6" podStartSLOduration=40.969404349 podStartE2EDuration="40.969404349s" podCreationTimestamp="2025-11-06 05:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:27:10.968198043 +0000 UTC m=+46.284868978" watchObservedRunningTime="2025-11-06 05:27:10.969404349 +0000 UTC m=+46.286075274" Nov 6 05:27:10.970791 containerd[1594]: time="2025-11-06T05:27:10.970647564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 05:27:10.970791 containerd[1594]: time="2025-11-06T05:27:10.970714299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:10.970891 kubelet[2728]: E1106 05:27:10.970842 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:27:10.970891 kubelet[2728]: E1106 05:27:10.970875 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:27:10.971353 kubelet[2728]: E1106 05:27:10.971265 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czv8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54b64844bb-tt8w9_calico-system(08a93b65-3d61-4796-a494-3e40e8cba374): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:10.972446 containerd[1594]: 
time="2025-11-06T05:27:10.971673271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 05:27:10.973377 kubelet[2728]: E1106 05:27:10.973103 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b64844bb-tt8w9" podUID="08a93b65-3d61-4796-a494-3e40e8cba374" Nov 6 05:27:11.315969 containerd[1594]: time="2025-11-06T05:27:11.315780771Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:11.317169 containerd[1594]: time="2025-11-06T05:27:11.317113224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 05:27:11.317377 containerd[1594]: time="2025-11-06T05:27:11.317167275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:11.317409 kubelet[2728]: E1106 05:27:11.317368 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:27:11.317461 kubelet[2728]: E1106 05:27:11.317425 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:27:11.317643 kubelet[2728]: E1106 05:27:11.317598 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2928,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gjsxf_calico-system(a5b9a4e3-c326-4625-87d3-4243511ed604): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:11.318905 kubelet[2728]: E1106 05:27:11.318836 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gjsxf" podUID="a5b9a4e3-c326-4625-87d3-4243511ed604" Nov 6 05:27:11.635688 systemd-networkd[1497]: cali8ce5d0e4f2c: Gained IPv6LL Nov 6 05:27:11.698642 systemd-networkd[1497]: cali9d66a061b25: Gained IPv6LL Nov 6 05:27:11.763621 systemd-networkd[1497]: calif5bc5b76366: Gained IPv6LL Nov 6 05:27:11.763874 systemd-networkd[1497]: cali5233d816f5e: Gained IPv6LL Nov 6 05:27:11.953906 kubelet[2728]: E1106 05:27:11.952938 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:11.953906 kubelet[2728]: E1106 05:27:11.953411 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:11.953906 kubelet[2728]: E1106 05:27:11.953842 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gjsxf" podUID="a5b9a4e3-c326-4625-87d3-4243511ed604" Nov 6 05:27:11.954954 kubelet[2728]: E1106 05:27:11.954436 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b64844bb-tt8w9" podUID="08a93b65-3d61-4796-a494-3e40e8cba374" Nov 6 05:27:12.954565 kubelet[2728]: E1106 05:27:12.954524 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:12.955109 kubelet[2728]: E1106 05:27:12.954781 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:15.412134 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:57772.service - OpenSSH per-connection server daemon (10.0.0.1:57772). Nov 6 05:27:15.468246 sshd[5014]: Accepted publickey for core from 10.0.0.1 port 57772 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:15.469871 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:15.474712 systemd-logind[1574]: New session 10 of user core. Nov 6 05:27:15.482619 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 05:27:15.570950 sshd[5019]: Connection closed by 10.0.0.1 port 57772 Nov 6 05:27:15.571381 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:15.586259 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:57772.service: Deactivated successfully. Nov 6 05:27:15.588765 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 05:27:15.590070 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit. Nov 6 05:27:15.594223 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:57784.service - OpenSSH per-connection server daemon (10.0.0.1:57784). Nov 6 05:27:15.595292 systemd-logind[1574]: Removed session 10. 
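The pod_startup_latency_tracker entries further up report podStartSLOduration=40.955705601s for coredns-674b8bbfcf-nwc4r, i.e. the gap between the pod's creation timestamp (05:26:30) and the watch-observed running time (05:27:10.955705601). The arithmetic can be reproduced with the standard time package (a sketch using the printed timestamps, not kubelet's tracker itself):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps as printed by kubelet's pod_startup_latency_tracker above.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-11-06 05:26:30 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-11-06 05:27:10.955705601 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 40.955705601s
}
```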
Nov 6 05:27:15.658053 sshd[5034]: Accepted publickey for core from 10.0.0.1 port 57784 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:15.659702 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:15.665000 systemd-logind[1574]: New session 11 of user core. Nov 6 05:27:15.677647 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 05:27:15.805564 sshd[5037]: Connection closed by 10.0.0.1 port 57784 Nov 6 05:27:15.806048 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:15.816544 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:57784.service: Deactivated successfully. Nov 6 05:27:15.818657 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 05:27:15.819604 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit. Nov 6 05:27:15.822497 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:57798.service - OpenSSH per-connection server daemon (10.0.0.1:57798). Nov 6 05:27:15.824550 systemd-logind[1574]: Removed session 11. Nov 6 05:27:15.880721 sshd[5048]: Accepted publickey for core from 10.0.0.1 port 57798 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:15.882451 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:15.887167 systemd-logind[1574]: New session 12 of user core. Nov 6 05:27:15.898617 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 05:27:16.057743 sshd[5051]: Connection closed by 10.0.0.1 port 57798 Nov 6 05:27:16.058556 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:16.067597 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:57798.service: Deactivated successfully. Nov 6 05:27:16.070510 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 05:27:16.071662 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit. Nov 6 05:27:16.073944 systemd-logind[1574]: Removed session 12. Nov 6 05:27:20.800715 containerd[1594]: time="2025-11-06T05:27:20.800634886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:27:21.070820 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:47064.service - OpenSSH per-connection server daemon (10.0.0.1:47064). Nov 6 05:27:21.133967 sshd[5074]: Accepted publickey for core from 10.0.0.1 port 47064 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:21.135657 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:21.140296 systemd-logind[1574]: New session 13 of user core. Nov 6 05:27:21.149657 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 05:27:21.236939 sshd[5077]: Connection closed by 10.0.0.1 port 47064 Nov 6 05:27:21.237362 sshd-session[5074]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:21.243092 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:47064.service: Deactivated successfully. Nov 6 05:27:21.245435 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 05:27:21.246418 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit. Nov 6 05:27:21.248094 systemd-logind[1574]: Removed session 13. 
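Every Calico image pull recorded above fails with 404 Not Found from ghcr.io, and the apiserver pull that has just started fails the same way in the entries that follow. One rough way to check a tag by hand is the OCI distribution manifest endpoint; the sketch below only prints the HTTP status and is not containerd's resolver (note that registries requiring auth, including ghcr.io for anonymous pulls, may answer 401 until a bearer token is presented):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Manifest endpoint for one of the tags the log fails to resolve.
	url := "https://ghcr.io/v2/flatcar/calico/csi/manifests/v3.30.4"

	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		panic(err)
	}
	// Accept values typical of a manifest/index lookup.
	req.Header.Set("Accept",
		"application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Expect 401 without a token; with auth, a missing tag surfaces as 404,
	// matching the "failed to resolve image ... not found" entries above.
	fmt.Println(resp.Status)
}
```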
Nov 6 05:27:21.270691 containerd[1594]: time="2025-11-06T05:27:21.270616491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:21.273847 containerd[1594]: time="2025-11-06T05:27:21.273765692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:27:21.273898 containerd[1594]: time="2025-11-06T05:27:21.273859417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:21.274153 kubelet[2728]: E1106 05:27:21.274096 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:27:21.274545 kubelet[2728]: E1106 05:27:21.274162 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:27:21.274545 kubelet[2728]: E1106 05:27:21.274347 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z94n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-5fd5b89b5c-rs8qz_calico-apiserver(0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:21.275649 kubelet[2728]: E1106 05:27:21.275601 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-rs8qz" podUID="0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a" Nov 6 05:27:21.794553 containerd[1594]: time="2025-11-06T05:27:21.794465642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 05:27:22.136372 containerd[1594]: time="2025-11-06T05:27:22.136204348Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:22.137519 containerd[1594]: time="2025-11-06T05:27:22.137427604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 05:27:22.137705 containerd[1594]: time="2025-11-06T05:27:22.137513475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:22.137743 kubelet[2728]: E1106 05:27:22.137687 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:27:22.137793 kubelet[2728]: E1106 05:27:22.137741 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:27:22.137924 kubelet[2728]: E1106 05:27:22.137866 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6e8622edc8ce4da48c424b544722cdf7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b8rss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b6bb984-wk547_calico-system(310afece-2da4-4aef-a7e0-89b082fc02bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:22.140010 containerd[1594]: time="2025-11-06T05:27:22.139968522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 05:27:22.505608 containerd[1594]: time="2025-11-06T05:27:22.505452781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:22.525431 containerd[1594]: time="2025-11-06T05:27:22.525355021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 05:27:22.525518 containerd[1594]: time="2025-11-06T05:27:22.525450620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:22.525696 kubelet[2728]: E1106 05:27:22.525634 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:27:22.525696 kubelet[2728]: E1106 05:27:22.525684 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:27:22.526375 kubelet[2728]: E1106 05:27:22.525803 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8rss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b6bb984-wk547_calico-system(310afece-2da4-4aef-a7e0-89b082fc02bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:22.527048 kubelet[2728]: E1106 05:27:22.527000 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67b6bb984-wk547" podUID="310afece-2da4-4aef-a7e0-89b082fc02bd" Nov 6 05:27:22.795223 containerd[1594]: time="2025-11-06T05:27:22.794799016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:27:23.171907 containerd[1594]: time="2025-11-06T05:27:23.171724473Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:23.173299 containerd[1594]: time="2025-11-06T05:27:23.173249305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 
05:27:23.173299 containerd[1594]: time="2025-11-06T05:27:23.173289921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:23.173616 kubelet[2728]: E1106 05:27:23.173556 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:27:23.173713 kubelet[2728]: E1106 05:27:23.173634 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:27:23.174372 containerd[1594]: time="2025-11-06T05:27:23.174066869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 05:27:23.175009 kubelet[2728]: E1106 05:27:23.174881 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65m2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fd5b89b5c-qh578_calico-apiserver(fbee8d7b-086c-45da-82fb-11a1baff350a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:23.177514 kubelet[2728]: 
E1106 05:27:23.177320 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-qh578" podUID="fbee8d7b-086c-45da-82fb-11a1baff350a" Nov 6 05:27:23.525647 containerd[1594]: time="2025-11-06T05:27:23.525538633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:23.526844 containerd[1594]: time="2025-11-06T05:27:23.526795272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 05:27:23.526945 containerd[1594]: time="2025-11-06T05:27:23.526881934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:23.527104 kubelet[2728]: E1106 05:27:23.527041 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:27:23.527732 kubelet[2728]: E1106 05:27:23.527107 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:27:23.527732 kubelet[2728]: E1106 05:27:23.527351 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpnkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z8s46_calico-system(01f13972-7d24-4634-a6cb-bffbc3b083bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:23.527905 containerd[1594]: time="2025-11-06T05:27:23.527448166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 05:27:23.528985 kubelet[2728]: E1106 05:27:23.528910 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8s46" podUID="01f13972-7d24-4634-a6cb-bffbc3b083bb" Nov 6 05:27:23.909016 containerd[1594]: time="2025-11-06T05:27:23.908844677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:23.910195 containerd[1594]: time="2025-11-06T05:27:23.910131151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 05:27:23.910195 containerd[1594]: time="2025-11-06T05:27:23.910165115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:23.910466 kubelet[2728]: E1106 05:27:23.910408 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:27:23.910571 kubelet[2728]: E1106 05:27:23.910493 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:27:23.910689 kubelet[2728]: E1106 
05:27:23.910641 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czv8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54b64844bb-tt8w9_calico-system(08a93b65-3d61-4796-a494-3e40e8cba374): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:23.911881 kubelet[2728]: E1106 05:27:23.911839 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b64844bb-tt8w9" podUID="08a93b65-3d61-4796-a494-3e40e8cba374" Nov 6 05:27:25.795859 containerd[1594]: time="2025-11-06T05:27:25.795592353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" 
Nov 6 05:27:26.110816 containerd[1594]: time="2025-11-06T05:27:26.110630673Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:26.112566 containerd[1594]: time="2025-11-06T05:27:26.112524527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 05:27:26.112777 containerd[1594]: time="2025-11-06T05:27:26.112636326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:26.112840 kubelet[2728]: E1106 05:27:26.112787 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:27:26.113242 kubelet[2728]: E1106 05:27:26.112857 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:27:26.113242 kubelet[2728]: E1106 05:27:26.113025 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2928,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gjsxf_calico-system(a5b9a4e3-c326-4625-87d3-4243511ed604): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:26.115380 containerd[1594]: time="2025-11-06T05:27:26.115313430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 05:27:26.251843 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:37668.service - OpenSSH per-connection server daemon (10.0.0.1:37668). Nov 6 05:27:26.318972 sshd[5100]: Accepted publickey for core from 10.0.0.1 port 37668 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:26.320650 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:26.325006 systemd-logind[1574]: New session 14 of user core. Nov 6 05:27:26.343610 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 05:27:26.441632 containerd[1594]: time="2025-11-06T05:27:26.441584653Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:26.486527 sshd[5107]: Connection closed by 10.0.0.1 port 37668 Nov 6 05:27:26.487130 sshd-session[5100]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:26.488198 containerd[1594]: time="2025-11-06T05:27:26.488064226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 05:27:26.488198 containerd[1594]: time="2025-11-06T05:27:26.488130240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:26.488489 kubelet[2728]: E1106 05:27:26.488424 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:27:26.488690 kubelet[2728]: E1106 05:27:26.488669 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:27:26.488937 kubelet[2728]: E1106 05:27:26.488883 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2928,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gjsxf_calico-system(a5b9a4e3-c326-4625-87d3-4243511ed604): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:26.490206 kubelet[2728]: E1106 05:27:26.490164 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gjsxf" podUID="a5b9a4e3-c326-4625-87d3-4243511ed604" Nov 6 05:27:26.493535 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:37668.service: Deactivated successfully. Nov 6 05:27:26.495828 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 05:27:26.496581 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit. Nov 6 05:27:26.498137 systemd-logind[1574]: Removed session 14. Nov 6 05:27:31.499544 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:37678.service - OpenSSH per-connection server daemon (10.0.0.1:37678). 
Nov 6 05:27:31.560526 sshd[5125]: Accepted publickey for core from 10.0.0.1 port 37678 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:31.561910 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:31.566516 systemd-logind[1574]: New session 15 of user core. Nov 6 05:27:31.576603 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 05:27:31.680222 sshd[5128]: Connection closed by 10.0.0.1 port 37678 Nov 6 05:27:31.680593 sshd-session[5125]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:31.686026 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:37678.service: Deactivated successfully. Nov 6 05:27:31.689861 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 05:27:31.691423 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit. Nov 6 05:27:31.693024 systemd-logind[1574]: Removed session 15. Nov 6 05:27:32.795804 kubelet[2728]: E1106 05:27:32.795425 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-rs8qz" podUID="0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a" Nov 6 05:27:33.794683 kubelet[2728]: E1106 05:27:33.794598 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67b6bb984-wk547" podUID="310afece-2da4-4aef-a7e0-89b082fc02bd" Nov 6 05:27:34.996080 kubelet[2728]: E1106 05:27:34.996042 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:35.794250 kubelet[2728]: E1106 05:27:35.794188 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-qh578" podUID="fbee8d7b-086c-45da-82fb-11a1baff350a" Nov 6 05:27:36.692527 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:33420.service - OpenSSH per-connection server daemon (10.0.0.1:33420). 
Nov 6 05:27:36.801454 sshd[5169]: Accepted publickey for core from 10.0.0.1 port 33420 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:36.803370 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:36.808022 systemd-logind[1574]: New session 16 of user core. Nov 6 05:27:36.818601 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 05:27:36.901053 sshd[5172]: Connection closed by 10.0.0.1 port 33420 Nov 6 05:27:36.901533 sshd-session[5169]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:36.915413 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:33420.service: Deactivated successfully. Nov 6 05:27:36.917525 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 05:27:36.918321 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit. Nov 6 05:27:36.921554 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:33430.service - OpenSSH per-connection server daemon (10.0.0.1:33430). Nov 6 05:27:36.922200 systemd-logind[1574]: Removed session 16. Nov 6 05:27:36.981543 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 33430 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:36.983455 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:36.988635 systemd-logind[1574]: New session 17 of user core. Nov 6 05:27:36.998610 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 05:27:37.331716 sshd[5189]: Connection closed by 10.0.0.1 port 33430 Nov 6 05:27:37.332547 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:37.344762 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:33430.service: Deactivated successfully. Nov 6 05:27:37.346867 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 05:27:37.347816 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit. Nov 6 05:27:37.350880 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:33444.service - OpenSSH per-connection server daemon (10.0.0.1:33444). Nov 6 05:27:37.351526 systemd-logind[1574]: Removed session 17. Nov 6 05:27:37.425492 sshd[5201]: Accepted publickey for core from 10.0.0.1 port 33444 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:37.427007 sshd-session[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:37.431849 systemd-logind[1574]: New session 18 of user core. Nov 6 05:27:37.442649 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 05:27:38.047978 sshd[5204]: Connection closed by 10.0.0.1 port 33444 Nov 6 05:27:38.048580 sshd-session[5201]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:38.061201 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:33444.service: Deactivated successfully. Nov 6 05:27:38.069123 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 05:27:38.070468 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit. Nov 6 05:27:38.074836 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:33454.service - OpenSSH per-connection server daemon (10.0.0.1:33454). Nov 6 05:27:38.075977 systemd-logind[1574]: Removed session 18. 
Nov 6 05:27:38.129669 sshd[5224]: Accepted publickey for core from 10.0.0.1 port 33454 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:38.131014 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:38.135768 systemd-logind[1574]: New session 19 of user core. Nov 6 05:27:38.141618 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 05:27:38.355547 sshd[5227]: Connection closed by 10.0.0.1 port 33454 Nov 6 05:27:38.357334 sshd-session[5224]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:38.367617 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:33454.service: Deactivated successfully. Nov 6 05:27:38.370205 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 05:27:38.373067 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit. Nov 6 05:27:38.374235 systemd[1]: Started sshd@19-10.0.0.73:22-10.0.0.1:33462.service - OpenSSH per-connection server daemon (10.0.0.1:33462). Nov 6 05:27:38.375879 systemd-logind[1574]: Removed session 19. Nov 6 05:27:38.428446 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 33462 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:38.430231 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:38.435073 systemd-logind[1574]: New session 20 of user core. Nov 6 05:27:38.442609 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 05:27:38.530048 sshd[5241]: Connection closed by 10.0.0.1 port 33462 Nov 6 05:27:38.530459 sshd-session[5238]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:38.535912 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit. Nov 6 05:27:38.536138 systemd[1]: sshd@19-10.0.0.73:22-10.0.0.1:33462.service: Deactivated successfully. Nov 6 05:27:38.538451 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 05:27:38.541552 systemd-logind[1574]: Removed session 20. 
Nov 6 05:27:38.796342 kubelet[2728]: E1106 05:27:38.796267 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8s46" podUID="01f13972-7d24-4634-a6cb-bffbc3b083bb" Nov 6 05:27:39.795086 kubelet[2728]: E1106 05:27:39.795027 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b64844bb-tt8w9" podUID="08a93b65-3d61-4796-a494-3e40e8cba374" Nov 6 05:27:40.795701 kubelet[2728]: E1106 05:27:40.795631 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gjsxf" podUID="a5b9a4e3-c326-4625-87d3-4243511ed604" Nov 6 05:27:43.551203 systemd[1]: Started sshd@20-10.0.0.73:22-10.0.0.1:33476.service - OpenSSH per-connection server daemon (10.0.0.1:33476). Nov 6 05:27:43.616412 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 33476 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:43.618605 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:43.626277 systemd-logind[1574]: New session 21 of user core. Nov 6 05:27:43.633785 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 05:27:43.751152 sshd[5261]: Connection closed by 10.0.0.1 port 33476 Nov 6 05:27:43.751593 sshd-session[5258]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:43.756931 systemd[1]: sshd@20-10.0.0.73:22-10.0.0.1:33476.service: Deactivated successfully. Nov 6 05:27:43.759672 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 05:27:43.760922 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit. Nov 6 05:27:43.763174 systemd-logind[1574]: Removed session 21. 
Nov 6 05:27:44.799255 containerd[1594]: time="2025-11-06T05:27:44.799171771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:27:45.176372 containerd[1594]: time="2025-11-06T05:27:45.176034220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:45.177500 containerd[1594]: time="2025-11-06T05:27:45.177456050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:27:45.179579 containerd[1594]: time="2025-11-06T05:27:45.177501267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:45.179989 kubelet[2728]: E1106 05:27:45.179861 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:27:45.179989 kubelet[2728]: E1106 05:27:45.179942 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:27:45.180740 kubelet[2728]: E1106 05:27:45.180658 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z94n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fd5b89b5c-rs8qz_calico-apiserver(0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:45.181871 kubelet[2728]: E1106 05:27:45.181815 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-rs8qz" podUID="0f73f8bc-c9f9-4d57-b6dd-3170e4d7175a" Nov 6 05:27:47.795501 containerd[1594]: time="2025-11-06T05:27:47.795280337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 05:27:48.129554 containerd[1594]: time="2025-11-06T05:27:48.129379928Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:48.130652 containerd[1594]: time="2025-11-06T05:27:48.130581443Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 05:27:48.130856 containerd[1594]: time="2025-11-06T05:27:48.130677598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:48.130928 kubelet[2728]: E1106 05:27:48.130880 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:27:48.131381 kubelet[2728]: E1106 05:27:48.130944 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:27:48.131381 kubelet[2728]: E1106 05:27:48.131202 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6e8622edc8ce4da48c424b544722cdf7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b8rss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b6bb984-wk547_calico-system(310afece-2da4-4aef-a7e0-89b082fc02bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:48.131584 containerd[1594]: time="2025-11-06T05:27:48.131558050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:27:48.498514 containerd[1594]: time="2025-11-06T05:27:48.498412469Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:48.499658 containerd[1594]: time="2025-11-06T05:27:48.499608053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:27:48.499850 containerd[1594]: time="2025-11-06T05:27:48.499693908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:48.499943 kubelet[2728]: E1106 05:27:48.499891 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:27:48.499993 kubelet[2728]: E1106 05:27:48.499954 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:27:48.500332 kubelet[2728]: E1106 05:27:48.500270 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65m2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fd5b89b5c-qh578_calico-apiserver(fbee8d7b-086c-45da-82fb-11a1baff350a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:48.500453 containerd[1594]: time="2025-11-06T05:27:48.500304303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 05:27:48.501608 kubelet[2728]: E1106 05:27:48.501516 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fd5b89b5c-qh578" podUID="fbee8d7b-086c-45da-82fb-11a1baff350a" Nov 6 05:27:48.764666 systemd[1]: Started sshd@21-10.0.0.73:22-10.0.0.1:58308.service - OpenSSH per-connection server daemon (10.0.0.1:58308). 
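Every pull in this stretch of the log fails the same way: containerd resolves the tag against ghcr.io, receives a 404, and kubelet surfaces it as ErrImagePull for the calico-apiserver and whisker pods. Below is a minimal Go sketch of that resolution step, useful for checking from the node whether the tag really is absent; the anonymous-token and manifest endpoints are assumptions about ghcr.io's standard OCI distribution API and are not taken from the log.

// Sketch: reproduce the 404 that containerd reports for the calico images.
// Assumed (not in the log): ghcr.io issues anonymous pull tokens at
// /token?scope=repository:<name>:pull and serves manifests at the standard
// OCI distribution path /v2/<name>/manifests/<tag>.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	name, tag := "flatcar/calico/apiserver", "v3.30.4" // image and tag from the log above

	// 1. Fetch an anonymous bearer token for pull access (assumed endpoint).
	resp, err := http.Get(fmt.Sprintf("https://ghcr.io/token?scope=repository:%s:pull", name))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest: 200 means the tag exists, 404 matches the log above.
	req, _ := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", name, tag), nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Printf("ghcr.io/%s:%s -> %s\n", name, tag, res.Status)
}

Run against flatcar/calico/apiserver and v3.30.4 this should print a 404, matching the "fetch failed after status: 404 Not Found" lines above; a 200 would instead point at a node-local resolver or credential problem rather than a missing tag.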
Nov 6 05:27:48.830067 containerd[1594]: time="2025-11-06T05:27:48.830021040Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:48.831192 containerd[1594]: time="2025-11-06T05:27:48.831159605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 05:27:48.831248 containerd[1594]: time="2025-11-06T05:27:48.831159625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:48.831468 kubelet[2728]: E1106 05:27:48.831421 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:27:48.831535 kubelet[2728]: E1106 05:27:48.831501 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:27:48.831676 kubelet[2728]: E1106 05:27:48.831644 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8rss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b6bb984-wk547_calico-system(310afece-2da4-4aef-a7e0-89b082fc02bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:48.833081 kubelet[2728]: E1106 05:27:48.833047 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67b6bb984-wk547" podUID="310afece-2da4-4aef-a7e0-89b082fc02bd" Nov 6 05:27:48.834506 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 58308 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:48.836115 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:48.841424 systemd-logind[1574]: New session 22 of user core. Nov 6 05:27:48.850622 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 05:27:48.928504 sshd[5283]: Connection closed by 10.0.0.1 port 58308 Nov 6 05:27:48.929694 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:48.933745 systemd[1]: sshd@21-10.0.0.73:22-10.0.0.1:58308.service: Deactivated successfully. Nov 6 05:27:48.937386 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 05:27:48.940327 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit. Nov 6 05:27:48.943193 systemd-logind[1574]: Removed session 22. 
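The single-line &Container{...} dumps that kubelet emits with each "Unhandled Error" are its Go rendering of the pod spec. As a readability aid, here is a short sketch that rebuilds the two fields repeated across the dumps above, the security context and the calico-apiserver readiness probe, as k8s.io/api/core/v1 values; the field values are copied from the log, while the package layout and the ptr helper are illustrative assumptions.

// Sketch: restate the flattened &Container{...} dumps as corev1 values.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// ptr is a local helper for the pointer fields kubelet prints as *false, *10001, etc.
func ptr[T any](v T) *T { return &v }

// securityContext mirrors SecurityContext{...,RunAsUser:*10001,RunAsNonRoot:*true,...} above.
var securityContext = &corev1.SecurityContext{
	Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
	Privileged:               ptr(false),
	RunAsUser:                ptr(int64(10001)),
	RunAsGroup:               ptr(int64(10001)),
	RunAsNonRoot:             ptr(true),
	AllowPrivilegeEscalation: ptr(false),
	SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
}

// readinessProbe mirrors ReadinessProbe:&Probe{...Path:/readyz,Port:{0 5443 },Scheme:HTTPS...}
// from the calico-apiserver container dump above.
var readinessProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path:   "/readyz",
			Port:   intstr.FromInt(5443),
			Scheme: corev1.URISchemeHTTPS,
		},
	},
	TimeoutSeconds:   5,
	PeriodSeconds:    60,
	SuccessThreshold: 1,
	FailureThreshold: 3,
}

func main() {
	fmt.Printf("runAsUser=%d, readiness=%s %s every %ds\n",
		*securityContext.RunAsUser,
		readinessProbe.HTTPGet.Scheme, readinessProbe.HTTPGet.Path,
		readinessProbe.PeriodSeconds)
}

Nothing here is new configuration; it only restates what the dumps already say: these containers run as UID/GID 10001 with all capabilities dropped under a RuntimeDefault seccomp profile, and the apiserver is probed over HTTPS on /readyz:5443 once a minute.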
Nov 6 05:27:50.795541 containerd[1594]: time="2025-11-06T05:27:50.795154590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 05:27:51.152835 containerd[1594]: time="2025-11-06T05:27:51.152672836Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:51.154286 containerd[1594]: time="2025-11-06T05:27:51.154204207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 05:27:51.154577 kubelet[2728]: E1106 05:27:51.154519 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:27:51.154952 kubelet[2728]: E1106 05:27:51.154591 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:27:51.154952 kubelet[2728]: E1106 05:27:51.154786 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czv8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54b64844bb-tt8w9_calico-system(08a93b65-3d61-4796-a494-3e40e8cba374): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:51.155975 kubelet[2728]: E1106 05:27:51.155936 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54b64844bb-tt8w9" podUID="08a93b65-3d61-4796-a494-3e40e8cba374" Nov 6 05:27:51.168256 containerd[1594]: time="2025-11-06T05:27:51.154267648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:51.793688 kubelet[2728]: E1106 05:27:51.793644 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:51.793893 kubelet[2728]: E1106 05:27:51.793649 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:52.796066 kubelet[2728]: E1106 05:27:52.795977 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:52.798186 containerd[1594]: time="2025-11-06T05:27:52.797590248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 05:27:53.158096 containerd[1594]: time="2025-11-06T05:27:53.157926713Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:53.159313 containerd[1594]: time="2025-11-06T05:27:53.159251819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 05:27:53.159497 containerd[1594]: time="2025-11-06T05:27:53.159332393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:53.159584 kubelet[2728]: E1106 05:27:53.159512 2728 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:27:53.159584 kubelet[2728]: E1106 05:27:53.159579 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:27:53.159846 kubelet[2728]: E1106 05:27:53.159767 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpnkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z8s46_calico-system(01f13972-7d24-4634-a6cb-bffbc3b083bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:53.160989 kubelet[2728]: E1106 05:27:53.160946 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8s46" podUID="01f13972-7d24-4634-a6cb-bffbc3b083bb" Nov 6 05:27:53.944293 systemd[1]: Started sshd@22-10.0.0.73:22-10.0.0.1:58310.service - OpenSSH per-connection server daemon (10.0.0.1:58310). Nov 6 05:27:54.003536 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 58310 ssh2: RSA SHA256:hvL/1SQKBl4/SY5LMsAsBLqCMXNmHfCcuNAV0hHCw5c Nov 6 05:27:54.005239 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:27:54.009827 systemd-logind[1574]: New session 23 of user core. Nov 6 05:27:54.015620 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 05:27:54.092543 sshd[5301]: Connection closed by 10.0.0.1 port 58310 Nov 6 05:27:54.090757 sshd-session[5298]: pam_unix(sshd:session): session closed for user core Nov 6 05:27:54.098013 systemd[1]: sshd@22-10.0.0.73:22-10.0.0.1:58310.service: Deactivated successfully. Nov 6 05:27:54.103624 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 05:27:54.106949 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit. Nov 6 05:27:54.111376 systemd-logind[1574]: Removed session 23. Nov 6 05:27:54.794537 kubelet[2728]: E1106 05:27:54.794455 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 05:27:54.796706 containerd[1594]: time="2025-11-06T05:27:54.796641076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 05:27:55.181569 containerd[1594]: time="2025-11-06T05:27:55.181388312Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:55.183599 containerd[1594]: time="2025-11-06T05:27:55.183511988Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 05:27:55.183690 containerd[1594]: time="2025-11-06T05:27:55.183579055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:55.183912 kubelet[2728]: E1106 05:27:55.183851 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:27:55.183981 kubelet[2728]: E1106 05:27:55.183924 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:27:55.184167 kubelet[2728]: E1106 05:27:55.184111 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2928,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gjsxf_calico-system(a5b9a4e3-c326-4625-87d3-4243511ed604): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:55.186792 containerd[1594]: time="2025-11-06T05:27:55.186749885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 05:27:55.502685 containerd[1594]: time="2025-11-06T05:27:55.502609664Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:27:55.503703 containerd[1594]: time="2025-11-06T05:27:55.503645036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 05:27:55.503703 containerd[1594]: time="2025-11-06T05:27:55.503695461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 6 05:27:55.503984 kubelet[2728]: E1106 05:27:55.503937 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:27:55.504057 kubelet[2728]: E1106 05:27:55.503998 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:27:55.504202 kubelet[2728]: E1106 05:27:55.504156 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2928,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gjsxf_calico-system(a5b9a4e3-c326-4625-87d3-4243511ed604): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 05:27:55.505466 kubelet[2728]: E1106 05:27:55.505389 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gjsxf" podUID="a5b9a4e3-c326-4625-87d3-4243511ed604"
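Interleaved with the pull failures, kubelet repeatedly logs "Nameserver limits exceeded" (the dns.go:153 entries above): the node's resolv.conf lists more nameservers than the resolver limit of three, so kubelet applies only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here) and warns about the rest. A toy Go sketch of that truncation behavior follows; it is not kubelet's code, and the resolv.conf contents are hypothetical.

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic three-nameserver resolver limit that kubelet enforces.
const maxNameservers = 3

func main() {
	// Hypothetical node resolv.conf with one nameserver too many.
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
search example.internal`

	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: keeping %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}

Trimming the node's resolv.conf to at most three nameservers, or pointing kubelet's --resolv-conf at a file that satisfies the limit, makes the warning stop.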