Nov 5 04:53:15.302073 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 03:01:50 -00 2025
Nov 5 04:53:15.302114 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9a076e14dca937d9663502c090e1ff4931f585a3752c3aa4c87feb67d6e5a465
Nov 5 04:53:15.302131 kernel: BIOS-provided physical RAM map:
Nov 5 04:53:15.302142 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 5 04:53:15.302149 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 5 04:53:15.302155 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 5 04:53:15.302164 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 5 04:53:15.302171 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 5 04:53:15.302180 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 5 04:53:15.302187 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 5 04:53:15.302194 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Nov 5 04:53:15.302203 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 5 04:53:15.302210 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 5 04:53:15.302217 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 5 04:53:15.302225 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 5 04:53:15.302233 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 5 04:53:15.302245 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 5 04:53:15.302252 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 5 04:53:15.302260 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 5 04:53:15.302267 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 5 04:53:15.302274 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 5 04:53:15.302282 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 5 04:53:15.302289 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 5 04:53:15.302297 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 04:53:15.302304 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 5 04:53:15.302311 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 04:53:15.302334 kernel: NX (Execute Disable) protection: active
Nov 5 04:53:15.302341 kernel: APIC: Static calls initialized
Nov 5 04:53:15.302348 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Nov 5 04:53:15.302356 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Nov 5 04:53:15.302363 kernel: extended physical RAM map:
Nov 5 04:53:15.302370 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 5 04:53:15.302378 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 5 04:53:15.302385 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 5 04:53:15.302392 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 5 04:53:15.302400 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 5 04:53:15.302407 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 5 04:53:15.302417 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 5 04:53:15.302424 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Nov 5 04:53:15.302432 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Nov 5 04:53:15.302443 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Nov 5 04:53:15.302452 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Nov 5 04:53:15.302460 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Nov 5 04:53:15.302468 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 5 04:53:15.302475 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 5 04:53:15.302483 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 5 04:53:15.302491 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 5 04:53:15.302498 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 5 04:53:15.302506 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 5 04:53:15.302514 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 5 04:53:15.302524 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 5 04:53:15.302531 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 5 04:53:15.302539 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 5 04:53:15.302547 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 5 04:53:15.302554 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 5 04:53:15.302562 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 04:53:15.302569 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 5 04:53:15.302577 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 04:53:15.302587 kernel: efi: EFI v2.7 by EDK II
Nov 5 04:53:15.302595 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Nov 5 04:53:15.302602 kernel: random: crng init done
Nov 5 04:53:15.302614 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Nov 5 04:53:15.302622 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Nov 5 04:53:15.302631 kernel: secureboot: Secure boot disabled
Nov 5 04:53:15.302639 kernel: SMBIOS 2.8 present.
Nov 5 04:53:15.302647 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 5 04:53:15.302655 kernel: DMI: Memory slots populated: 1/1
Nov 5 04:53:15.302665 kernel: Hypervisor detected: KVM
Nov 5 04:53:15.302681 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 5 04:53:15.302704 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 04:53:15.302715 kernel: kvm-clock: using sched offset of 5193322599 cycles
Nov 5 04:53:15.302725 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 04:53:15.302742 kernel: tsc: Detected 2794.750 MHz processor
Nov 5 04:53:15.302752 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 04:53:15.302763 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 04:53:15.302774 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 5 04:53:15.302785 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 5 04:53:15.302796 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 04:53:15.302808 kernel: Using GB pages for direct mapping
Nov 5 04:53:15.302822 kernel: ACPI: Early table checksum verification disabled
Nov 5 04:53:15.302833 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 5 04:53:15.302844 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 5 04:53:15.302856 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:53:15.302867 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:53:15.302878 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 5 04:53:15.302889 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:53:15.302904 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:53:15.302915 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:53:15.302926 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:53:15.302937 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 5 04:53:15.302947 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 5 04:53:15.302958 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 5 04:53:15.302969 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 5 04:53:15.302984 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 5 04:53:15.302996 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 5 04:53:15.303006 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 5 04:53:15.303017 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 5 04:53:15.303029 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 5 04:53:15.303040 kernel: No NUMA configuration found
Nov 5 04:53:15.303051 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Nov 5 04:53:15.303062 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Nov 5 04:53:15.303077 kernel: Zone ranges:
Nov 5 04:53:15.303088 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 04:53:15.303099 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Nov 5 04:53:15.303110 kernel: Normal empty
Nov 5 04:53:15.303143 kernel: Device empty
Nov 5 04:53:15.303154 kernel: Movable zone start for each node
Nov 5 04:53:15.303165 kernel: Early memory node ranges
Nov 5 04:53:15.303176 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 5 04:53:15.303196 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 5 04:53:15.303207 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 5 04:53:15.303218 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Nov 5 04:53:15.303229 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Nov 5 04:53:15.303240 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Nov 5 04:53:15.303251 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Nov 5 04:53:15.303262 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Nov 5 04:53:15.303279 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Nov 5 04:53:15.303291 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 04:53:15.303310 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 5 04:53:15.303339 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 5 04:53:15.303350 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 04:53:15.303361 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Nov 5 04:53:15.303373 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Nov 5 04:53:15.303384 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 5 04:53:15.303396 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 5 04:53:15.303408 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Nov 5 04:53:15.303424 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 04:53:15.303447 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 04:53:15.303459 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 04:53:15.303471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 04:53:15.303486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 04:53:15.303498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 04:53:15.303509 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 04:53:15.303521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 04:53:15.303532 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 04:53:15.303544 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 04:53:15.303555 kernel: TSC deadline timer available
Nov 5 04:53:15.303570 kernel: CPU topo: Max. logical packages: 1
Nov 5 04:53:15.303581 kernel: CPU topo: Max. logical dies: 1
Nov 5 04:53:15.303592 kernel: CPU topo: Max. dies per package: 1
Nov 5 04:53:15.303604 kernel: CPU topo: Max. threads per core: 1
Nov 5 04:53:15.303615 kernel: CPU topo: Num. cores per package: 4
Nov 5 04:53:15.303626 kernel: CPU topo: Num. threads per package: 4
Nov 5 04:53:15.303638 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 5 04:53:15.303652 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 04:53:15.303664 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 5 04:53:15.303676 kernel: kvm-guest: setup PV sched yield
Nov 5 04:53:15.303687 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 5 04:53:15.303698 kernel: Booting paravirtualized kernel on KVM
Nov 5 04:53:15.303710 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 04:53:15.303722 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 5 04:53:15.303733 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 5 04:53:15.303749 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 5 04:53:15.303760 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 5 04:53:15.303771 kernel: kvm-guest: PV spinlocks enabled
Nov 5 04:53:15.303783 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
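The BIOS-e820 entries above can be totaled mechanically when auditing a guest's memory layout. The sketch below is a minimal, hypothetical helper (not part of the boot itself): it parses `BIOS-e820:` lines of the exact shape logged above and sums the ranges the firmware marked `usable`; note the range ends are inclusive.

```python
import re

# Matches "BIOS-e820: [mem 0xSTART-0xEND] TYPE" lines as printed above.
E820_RE = re.compile(
    r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (usable|reserved|ACPI data|ACPI NVS)"
)

def usable_bytes(log_lines):
    """Return total bytes the firmware marked 'usable' (ranges are inclusive)."""
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total

# Sample entries copied verbatim from the map above.
sample = [
    "kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable",
    "kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable",
    "kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS",
]
print(hex(usable_bytes(sample)))  # 0xa0000 + 0x700000 = 0x7a0000
```

Run against the full map, the same routine gives the RAM the kernel can actually place pages in, which is why the later `Memory:` line reports less than the nominal guest size.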
Nov 5 04:53:15.303802 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9a076e14dca937d9663502c090e1ff4931f585a3752c3aa4c87feb67d6e5a465
Nov 5 04:53:15.303815 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 04:53:15.303831 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 04:53:15.303842 kernel: Fallback order for Node 0: 0
Nov 5 04:53:15.303854 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Nov 5 04:53:15.303866 kernel: Policy zone: DMA32
Nov 5 04:53:15.303878 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 04:53:15.303889 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 5 04:53:15.303900 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 04:53:15.303913 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 04:53:15.303923 kernel: Dynamic Preempt: voluntary
Nov 5 04:53:15.303934 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 04:53:15.303946 kernel: rcu: RCU event tracing is enabled.
Nov 5 04:53:15.303958 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 5 04:53:15.303970 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 04:53:15.303982 kernel: Rude variant of Tasks RCU enabled.
Nov 5 04:53:15.303994 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 04:53:15.304009 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 04:53:15.304021 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 5 04:53:15.304036 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 04:53:15.304049 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 04:53:15.304060 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 04:53:15.304071 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 5 04:53:15.304082 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 04:53:15.304097 kernel: Console: colour dummy device 80x25
Nov 5 04:53:15.304108 kernel: printk: legacy console [ttyS0] enabled
Nov 5 04:53:15.304131 kernel: ACPI: Core revision 20240827
Nov 5 04:53:15.304142 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 04:53:15.304151 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 04:53:15.304159 kernel: x2apic enabled
Nov 5 04:53:15.304167 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 04:53:15.304178 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 5 04:53:15.304187 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 5 04:53:15.304195 kernel: kvm-guest: setup PV IPIs
Nov 5 04:53:15.304203 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 04:53:15.304213 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 5 04:53:15.304224 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 5 04:53:15.304236 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 5 04:53:15.304251 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 5 04:53:15.304263 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 5 04:53:15.304274 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 04:53:15.304286 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 04:53:15.304298 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 04:53:15.304310 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 5 04:53:15.304338 kernel: active return thunk: retbleed_return_thunk
Nov 5 04:53:15.304353 kernel: RETBleed: Mitigation: untrained return thunk
Nov 5 04:53:15.304369 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 04:53:15.304381 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 04:53:15.304393 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 5 04:53:15.304405 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 5 04:53:15.304417 kernel: active return thunk: srso_return_thunk
Nov 5 04:53:15.304429 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 5 04:53:15.304444 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 04:53:15.304456 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 04:53:15.304467 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 04:53:15.304479 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 04:53:15.304491 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
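The "Calibrating delay loop (skipped)" line above reports a preset figure derived directly from `lpj`: the kernel computes BogoMIPS as `lpj * HZ / 500000`. A quick sanity check, assuming `HZ=1000` (consistent with the 100-jiffy / 100-ms RCU delay logged earlier, but still an assumption about this kernel's config):

```python
# Check that the logged lpj and BogoMIPS values are consistent.
HZ = 1000                          # assumed tick rate for this kernel build
lpj = 2794750                      # from "(lpj=2794750)" in the log
bogomips = lpj * HZ / 500_000
print(f"{bogomips:.2f} BogoMIPS")  # 5589.50, matching the log line
```

The per-CPU figure also agrees with the later SMP summary: 4 CPUs at 5589.50 gives the "Total of 4 processors activated (22358.00 BogoMIPS)" reported once all cores are up.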
Nov 5 04:53:15.304503 kernel: Freeing SMP alternatives memory: 32K
Nov 5 04:53:15.304514 kernel: pid_max: default: 32768 minimum: 301
Nov 5 04:53:15.304528 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 04:53:15.304540 kernel: landlock: Up and running.
Nov 5 04:53:15.304551 kernel: SELinux: Initializing.
Nov 5 04:53:15.304563 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 04:53:15.304575 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 04:53:15.304587 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 5 04:53:15.304599 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 5 04:53:15.304613 kernel: ... version: 0
Nov 5 04:53:15.304624 kernel: ... bit width: 48
Nov 5 04:53:15.304636 kernel: ... generic registers: 6
Nov 5 04:53:15.304648 kernel: ... value mask: 0000ffffffffffff
Nov 5 04:53:15.304659 kernel: ... max period: 00007fffffffffff
Nov 5 04:53:15.304671 kernel: ... fixed-purpose events: 0
Nov 5 04:53:15.304682 kernel: ... event mask: 000000000000003f
Nov 5 04:53:15.304696 kernel: signal: max sigframe size: 1776
Nov 5 04:53:15.304708 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 04:53:15.304720 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 04:53:15.304734 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 04:53:15.304746 kernel: smp: Bringing up secondary CPUs ...
Nov 5 04:53:15.304758 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 04:53:15.304769 kernel: .... node #0, CPUs: #1 #2 #3
Nov 5 04:53:15.304781 kernel: smp: Brought up 1 node, 4 CPUs
Nov 5 04:53:15.304795 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 5 04:53:15.304807 kernel: Memory: 2441100K/2565800K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15348K init, 2696K bss, 118764K reserved, 0K cma-reserved)
Nov 5 04:53:15.304819 kernel: devtmpfs: initialized
Nov 5 04:53:15.304830 kernel: x86/mm: Memory block size: 128MB
Nov 5 04:53:15.304842 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 5 04:53:15.304854 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 5 04:53:15.304866 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Nov 5 04:53:15.304880 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 5 04:53:15.304892 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Nov 5 04:53:15.304904 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 5 04:53:15.304915 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 04:53:15.304927 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 5 04:53:15.304939 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 04:53:15.304953 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 04:53:15.304965 kernel: audit: initializing netlink subsys (disabled)
Nov 5 04:53:15.304976 kernel: audit: type=2000 audit(1762318392.380:1): state=initialized audit_enabled=0 res=1
Nov 5 04:53:15.304988 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 04:53:15.305002 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 04:53:15.305015 kernel: cpuidle: using governor menu
Nov 5 04:53:15.305028 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 04:53:15.305040 kernel: dca service started, version 1.12.1
Nov 5 04:53:15.305055 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 5 04:53:15.305067 kernel: PCI: Using configuration type 1 for base access
Nov 5 04:53:15.305079 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 04:53:15.305091 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 04:53:15.305102 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 04:53:15.305114 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 04:53:15.305134 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 04:53:15.305149 kernel: ACPI: Added _OSI(Module Device)
Nov 5 04:53:15.305161 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 04:53:15.305172 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 04:53:15.305184 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 04:53:15.305195 kernel: ACPI: Interpreter enabled
Nov 5 04:53:15.305207 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 5 04:53:15.305218 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 04:53:15.305233 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 04:53:15.305245 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 04:53:15.305257 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 5 04:53:15.305268 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 04:53:15.305583 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 04:53:15.305808 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 5 04:53:15.306045 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 5 04:53:15.306063 kernel: PCI host bridge to bus 0000:00
Nov 5 04:53:15.306315 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 04:53:15.306558 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 04:53:15.306773 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 04:53:15.306987 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Nov 5 04:53:15.307224 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Nov 5 04:53:15.307455 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Nov 5 04:53:15.307668 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 04:53:15.307916 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 5 04:53:15.308178 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 5 04:53:15.308435 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Nov 5 04:53:15.308685 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Nov 5 04:53:15.308910 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 5 04:53:15.309136 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 04:53:15.309435 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 5 04:53:15.309669 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Nov 5 04:53:15.309900 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Nov 5 04:53:15.310138 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Nov 5 04:53:15.310392 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 5 04:53:15.310620 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Nov 5 04:53:15.310840 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Nov 5 04:53:15.311060 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Nov 5 04:53:15.311314 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 04:53:15.311557 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Nov 5 04:53:15.311778 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Nov 5 04:53:15.311996 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Nov 5 04:53:15.312230 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Nov 5 04:53:15.312491 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 5 04:53:15.312729 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 5 04:53:15.312980 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 5 04:53:15.313422 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Nov 5 04:53:15.313923 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Nov 5 04:53:15.314206 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 5 04:53:15.314465 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Nov 5 04:53:15.314483 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 04:53:15.314495 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 04:53:15.314507 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 04:53:15.314519 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 04:53:15.314531 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 5 04:53:15.314547 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 5 04:53:15.314559 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 5 04:53:15.314570 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 5 04:53:15.314583 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 5 04:53:15.314594 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 5 04:53:15.314606 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 5 04:53:15.314618 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 5 04:53:15.314632 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 5 04:53:15.314644 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 5 04:53:15.314656 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 5 04:53:15.314668 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 5 04:53:15.314679 kernel: iommu: Default domain type: Translated
Nov 5 04:53:15.314691 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 04:53:15.314702 kernel: efivars: Registered efivars operations
Nov 5 04:53:15.314717 kernel: PCI: Using ACPI for IRQ routing
Nov 5 04:53:15.314729 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 04:53:15.314741 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 5 04:53:15.314753 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Nov 5 04:53:15.314765 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Nov 5 04:53:15.314777 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Nov 5 04:53:15.314789 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Nov 5 04:53:15.314801 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Nov 5 04:53:15.314817 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Nov 5 04:53:15.314829 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Nov 5 04:53:15.315060 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 5 04:53:15.315303 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 5 04:53:15.315546 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 04:53:15.315564 kernel: vgaarb: loaded
Nov 5 04:53:15.315582 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 04:53:15.315594 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 04:53:15.315606 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 04:53:15.315618 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 04:53:15.315629 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 04:53:15.315641 kernel: pnp: PnP ACPI init
Nov 5 04:53:15.315897 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Nov 5 04:53:15.315921 kernel: pnp: PnP ACPI: found 6 devices
Nov 5 04:53:15.315934 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 04:53:15.315947 kernel: NET: Registered PF_INET protocol family
Nov 5 04:53:15.315959 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 04:53:15.315971 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 04:53:15.315984 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 04:53:15.315999 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 04:53:15.316011 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 04:53:15.316023 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 04:53:15.316036 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 04:53:15.316048 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 04:53:15.316060 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 04:53:15.316072 kernel: NET: Registered PF_XDP protocol family
Nov 5 04:53:15.316306 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 5 04:53:15.316588 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 5 04:53:15.316820 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 04:53:15.317029 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 04:53:15.317257 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 04:53:15.317489 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Nov 5 04:53:15.317701 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Nov 5 04:53:15.317919 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Nov 5 04:53:15.317938 kernel: PCI: CLS 0 bytes, default 64
Nov 5 04:53:15.317952 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 5 04:53:15.317969 kernel: Initialise system trusted keyrings
Nov 5 04:53:15.317984 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 04:53:15.317996 kernel: Key type asymmetric registered
Nov 5 04:53:15.318008 kernel: Asymmetric key parser 'x509' registered
Nov 5 04:53:15.318021 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 04:53:15.318034 kernel: io scheduler mq-deadline registered
Nov 5 04:53:15.318047 kernel: io scheduler kyber registered
Nov 5 04:53:15.318059 kernel: io scheduler bfq registered
Nov 5 04:53:15.318075 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 04:53:15.318089 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 5 04:53:15.318103 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 5 04:53:15.318115 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 5 04:53:15.318139 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 04:53:15.318152 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 04:53:15.318164 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 04:53:15.318180 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 04:53:15.318193 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 04:53:15.318449 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 5 04:53:15.318470 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 04:53:15.318693 kernel: rtc_cmos 00:04: registered as rtc0
Nov 5 04:53:15.318922 kernel: rtc_cmos 00:04: setting system clock to 2025-11-05T04:53:13 UTC (1762318393)
Nov 5 04:53:15.319169 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 5 04:53:15.319189 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 5 04:53:15.319202 kernel: efifb: probing for efifb
Nov 5 04:53:15.319215 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Nov 5 04:53:15.319227 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Nov 5 04:53:15.319240 kernel: efifb: scrolling: redraw
Nov 5 04:53:15.319253 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 5 04:53:15.319270 kernel: Console: switching to colour frame buffer device 160x50
Nov 5 04:53:15.319283 kernel: fb0: EFI VGA frame buffer device
Nov 5 04:53:15.319296 kernel: pstore: Using crash dump compression: deflate
Nov 5 04:53:15.319309 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 5 04:53:15.319340 kernel: NET: Registered PF_INET6 protocol family
Nov 5 04:53:15.319353 kernel: Segment Routing with IPv6
Nov 5 04:53:15.319366 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 04:53:15.319378 kernel: NET: Registered PF_PACKET protocol family
Nov 5 04:53:15.319395 kernel: Key type dns_resolver registered
Nov 5 04:53:15.319407 kernel: IPI shorthand broadcast: enabled
Nov 5 04:53:15.319418 kernel: sched_clock: Marking stable (2206002409, 315684003)->(2616624350, -94937938)
Nov 5 04:53:15.319430 kernel: registered taskstats version 1
Nov 5 04:53:15.319442 kernel: Loading compiled-in X.509 certificates
Nov 5 04:53:15.319454 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: cfd469c5acf75e2b7be33dd554bbf88cbfe73c93'
Nov 5 04:53:15.319466 kernel: Demotion targets for Node 0: null
Nov 5 04:53:15.319480 kernel: Key type .fscrypt registered
Nov 5 04:53:15.319492 kernel: Key type fscrypt-provisioning registered
Nov 5 04:53:15.319504 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 04:53:15.319516 kernel: ima: Allocated hash algorithm: sha1
Nov 5 04:53:15.319528 kernel: ima: No architecture policies found
Nov 5 04:53:15.319540 kernel: clk: Disabling unused clocks
Nov 5 04:53:15.319552 kernel: Freeing unused kernel image (initmem) memory: 15348K
Nov 5 04:53:15.319567 kernel: Write protecting the kernel read-only data: 45056k
Nov 5 04:53:15.319579 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 5 04:53:15.319591 kernel: Run /init as init process
Nov 5 04:53:15.319603 kernel: with arguments:
Nov 5 04:53:15.319614 kernel: /init
Nov 5 04:53:15.319626 kernel: with environment:
Nov 5 04:53:15.319638 kernel: HOME=/
Nov 5 04:53:15.319650 kernel: TERM=linux
Nov 5 04:53:15.319664 kernel: SCSI subsystem initialized
Nov 5 04:53:15.319677 kernel: libata version 3.00 loaded.
Nov 5 04:53:15.319904 kernel: ahci 0000:00:1f.2: version 3.0
Nov 5 04:53:15.319921 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 5 04:53:15.320222 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 5 04:53:15.320474 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 5 04:53:15.320673 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 5 04:53:15.320885 kernel: scsi host0: ahci
Nov 5 04:53:15.321073 kernel: scsi host1: ahci
Nov 5 04:53:15.321279 kernel: scsi host2: ahci
Nov 5 04:53:15.321504 kernel: scsi host3: ahci
Nov 5 04:53:15.321708 kernel: scsi host4: ahci
Nov 5 04:53:15.321893 kernel: scsi host5: ahci
Nov 5 04:53:15.321906 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Nov 5 04:53:15.321916 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Nov 5 04:53:15.321925 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Nov 5 04:53:15.321934 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Nov 5 04:53:15.321947 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Nov 5 04:53:15.321956 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Nov 5 04:53:15.321966 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 5 04:53:15.321975 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 5 04:53:15.321986 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 5 04:53:15.321995 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 5 04:53:15.322003 kernel: ata3.00: LPM support broken, forcing max_power
Nov 5 04:53:15.322012 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 5 04:53:15.322023 kernel: ata3.00: applying bridge limits
Nov 5 04:53:15.322032 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 5 04:53:15.322041 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 5 04:53:15.322050 kernel: ata3.00: LPM support broken, forcing max_power
Nov 5 04:53:15.322060 kernel: ata3.00: configured for UDMA/100
Nov 5 04:53:15.322285 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 5 04:53:15.322514 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 5 04:53:15.322691 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 5 04:53:15.322703 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 04:53:15.322712 kernel: GPT:16515071 != 27000831
Nov 5 04:53:15.322720 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 04:53:15.322729 kernel: GPT:16515071 != 27000831
Nov 5 04:53:15.322738 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 04:53:15.322750 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 04:53:15.322942 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 5 04:53:15.322954 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 5 04:53:15.323154 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 5 04:53:15.323166 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 04:53:15.323175 kernel: device-mapper: uevent: version 1.0.3
Nov 5 04:53:15.323188 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 04:53:15.323197 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 5 04:53:15.323206 kernel: raid6: avx2x4 gen() 29468 MB/s
Nov 5 04:53:15.323215 kernel: raid6: avx2x2 gen() 30017 MB/s
Nov 5 04:53:15.323224 kernel: raid6: avx2x1 gen() 24625 MB/s
Nov 5 04:53:15.323232 kernel: raid6: using algorithm avx2x2 gen() 30017 MB/s
Nov 5 04:53:15.323241 kernel: raid6: .... xor() 18346 MB/s, rmw enabled
Nov 5 04:53:15.323250 kernel: raid6: using avx2x2 recovery algorithm
Nov 5 04:53:15.323261 kernel: xor: automatically using best checksumming function avx
Nov 5 04:53:15.323270 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 04:53:15.323279 kernel: BTRFS: device fsid 8119ddf0-7fda-4d84-ad78-3566733896c1 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (181)
Nov 5 04:53:15.323289 kernel: BTRFS info (device dm-0): first mount of filesystem 8119ddf0-7fda-4d84-ad78-3566733896c1
Nov 5 04:53:15.323297 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 5 04:53:15.323306 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 04:53:15.323334 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 04:53:15.323348 kernel: loop: module loaded
Nov 5 04:53:15.323360 kernel: loop0: detected capacity change from 0 to 100136
Nov 5 04:53:15.323372 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 04:53:15.323385 systemd[1]: Successfully made /usr/ read-only.
Nov 5 04:53:15.323401 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 04:53:15.323415 systemd[1]: Detected virtualization kvm.
Nov 5 04:53:15.323424 systemd[1]: Detected architecture x86-64.
Nov 5 04:53:15.323433 systemd[1]: Running in initrd.
Nov 5 04:53:15.323442 systemd[1]: No hostname configured, using default hostname.
Nov 5 04:53:15.323452 systemd[1]: Hostname set to .
Nov 5 04:53:15.323461 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 04:53:15.323470 systemd[1]: Queued start job for default target initrd.target.
Nov 5 04:53:15.323481 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 04:53:15.323491 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 04:53:15.323501 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 04:53:15.323511 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 04:53:15.323521 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 04:53:15.323531 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 04:53:15.323542 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 04:53:15.323552 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 04:53:15.323561 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 04:53:15.323571 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 04:53:15.323580 systemd[1]: Reached target paths.target - Path Units.
Nov 5 04:53:15.323589 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 04:53:15.323600 systemd[1]: Reached target swap.target - Swaps.
Nov 5 04:53:15.323610 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 04:53:15.323619 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 04:53:15.323628 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 04:53:15.323638 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 04:53:15.323647 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 04:53:15.323656 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 04:53:15.323668 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 04:53:15.323678 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 04:53:15.323687 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 04:53:15.323696 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 04:53:15.323706 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 04:53:15.323715 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 04:53:15.323725 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 04:53:15.323737 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 04:53:15.323746 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 04:53:15.323755 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 04:53:15.323765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 04:53:15.323774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:53:15.323786 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 04:53:15.323796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 04:53:15.323805 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 04:53:15.323815 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 04:53:15.323863 systemd-journald[317]: Collecting audit messages is disabled.
Nov 5 04:53:15.323888 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 04:53:15.323898 systemd-journald[317]: Journal started
Nov 5 04:53:15.323920 systemd-journald[317]: Runtime Journal (/run/log/journal/685e163b44554e5b8cbf07e190e1d71f) is 6M, max 48.1M, 42M free.
Nov 5 04:53:15.369831 kernel: Bridge firewalling registered
Nov 5 04:53:15.370011 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 04:53:15.370126 systemd-modules-load[320]: Inserted module 'br_netfilter'
Nov 5 04:53:15.370851 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 04:53:15.377493 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 04:53:15.379441 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 04:53:15.380026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 04:53:15.382540 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 04:53:15.404092 systemd-tmpfiles[334]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 04:53:15.407796 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:53:15.411835 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 04:53:15.416248 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 04:53:15.420259 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 04:53:15.422605 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 04:53:15.425167 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 04:53:15.451601 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 04:53:15.457354 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 04:53:15.485841 dracut-cmdline[363]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9a076e14dca937d9663502c090e1ff4931f585a3752c3aa4c87feb67d6e5a465
Nov 5 04:53:15.488072 systemd-resolved[347]: Positive Trust Anchors:
Nov 5 04:53:15.488080 systemd-resolved[347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 04:53:15.488085 systemd-resolved[347]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 04:53:15.488127 systemd-resolved[347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 04:53:15.521210 systemd-resolved[347]: Defaulting to hostname 'linux'.
Nov 5 04:53:15.523014 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 04:53:15.523165 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 04:53:15.600357 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 04:53:15.615346 kernel: iscsi: registered transport (tcp)
Nov 5 04:53:15.661357 kernel: iscsi: registered transport (qla4xxx)
Nov 5 04:53:15.661390 kernel: QLogic iSCSI HBA Driver
Nov 5 04:53:15.688175 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 04:53:15.716035 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 04:53:15.716534 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 04:53:15.778370 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 04:53:15.783033 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 04:53:15.786316 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 04:53:15.842474 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 04:53:15.844176 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 04:53:15.883203 systemd-udevd[598]: Using default interface naming scheme 'v257'.
Nov 5 04:53:15.898533 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 04:53:15.904506 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 04:53:15.932744 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 04:53:15.936942 dracut-pre-trigger[674]: rd.md=0: removing MD RAID activation
Nov 5 04:53:15.939409 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 04:53:15.965194 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 04:53:15.970312 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 04:53:15.997902 systemd-networkd[713]: lo: Link UP
Nov 5 04:53:15.997915 systemd-networkd[713]: lo: Gained carrier
Nov 5 04:53:15.998808 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 04:53:16.001007 systemd[1]: Reached target network.target - Network.
Nov 5 04:53:16.074182 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 04:53:16.078486 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 04:53:16.138722 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 5 04:53:16.168921 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 5 04:53:16.180337 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 5 04:53:16.187346 kernel: cryptd: max_cpu_qlen set to 1000
Nov 5 04:53:16.208157 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 04:53:16.220867 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 5 04:53:16.272043 systemd-networkd[713]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 04:53:16.284895 kernel: AES CTR mode by8 optimization enabled
Nov 5 04:53:16.272740 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 04:53:16.277470 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 04:53:16.277842 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 04:53:16.278200 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:53:16.279167 systemd-networkd[713]: eth0: Link UP
Nov 5 04:53:16.279467 systemd-networkd[713]: eth0: Gained carrier
Nov 5 04:53:16.279484 systemd-networkd[713]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 04:53:16.281371 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:53:16.312474 disk-uuid[813]: Primary Header is updated.
Nov 5 04:53:16.312474 disk-uuid[813]: Secondary Entries is updated.
Nov 5 04:53:16.312474 disk-uuid[813]: Secondary Header is updated.
Nov 5 04:53:16.288624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:53:16.294827 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 04:53:16.314800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 04:53:16.314958 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:53:16.322592 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:53:16.375370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:53:16.417994 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 04:53:16.420529 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 04:53:16.423792 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 04:53:16.425943 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 04:53:16.431168 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 04:53:16.445109 systemd-resolved[347]: Detected conflict on linux IN A 10.0.0.99
Nov 5 04:53:16.445125 systemd-resolved[347]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Nov 5 04:53:16.467759 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 04:53:16.712401 systemd-resolved[347]: Detected conflict on linux8 IN A 10.0.0.99
Nov 5 04:53:16.712420 systemd-resolved[347]: Hostname conflict, changing published hostname from 'linux8' to 'linux16'.
Nov 5 04:53:17.387712 disk-uuid[827]: Warning: The kernel is still using the old partition table.
Nov 5 04:53:17.387712 disk-uuid[827]: The new table will be used at the next reboot or after you
Nov 5 04:53:17.387712 disk-uuid[827]: run partprobe(8) or kpartx(8)
Nov 5 04:53:17.387712 disk-uuid[827]: The operation has completed successfully.
Nov 5 04:53:17.398261 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 04:53:17.399750 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 04:53:17.405511 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 04:53:17.449663 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (872)
Nov 5 04:53:17.449720 kernel: BTRFS info (device vda6): first mount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:53:17.449732 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 04:53:17.455040 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 04:53:17.455077 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 04:53:17.462351 kernel: BTRFS info (device vda6): last unmount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:53:17.463398 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 04:53:17.467680 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 04:53:17.753499 systemd-networkd[713]: eth0: Gained IPv6LL
Nov 5 04:53:17.784165 ignition[891]: Ignition 2.22.0
Nov 5 04:53:17.784181 ignition[891]: Stage: fetch-offline
Nov 5 04:53:17.784238 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Nov 5 04:53:17.784250 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:53:17.785184 ignition[891]: parsed url from cmdline: ""
Nov 5 04:53:17.785189 ignition[891]: no config URL provided
Nov 5 04:53:17.785195 ignition[891]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 04:53:17.785210 ignition[891]: no config at "/usr/lib/ignition/user.ign"
Nov 5 04:53:17.785277 ignition[891]: op(1): [started] loading QEMU firmware config module
Nov 5 04:53:17.785283 ignition[891]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 5 04:53:17.796481 ignition[891]: op(1): [finished] loading QEMU firmware config module
Nov 5 04:53:17.875960 ignition[891]: parsing config with SHA512: aba17e4b8d830fbc45f5437b08d40b206293da4d4bb0ad18041905e7232e13e5fb32e7d0354662dbc4381f7bc1cd88b57c07a54cac41c56a74a72b03cbd3f2ae
Nov 5 04:53:17.893185 unknown[891]: fetched base config from "system"
Nov 5 04:53:17.893200 unknown[891]: fetched user config from "qemu"
Nov 5 04:53:17.893620 ignition[891]: fetch-offline: fetch-offline passed
Nov 5 04:53:17.893703 ignition[891]: Ignition finished successfully
Nov 5 04:53:17.901420 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 04:53:17.901717 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 5 04:53:17.902720 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 04:53:17.971913 ignition[902]: Ignition 2.22.0
Nov 5 04:53:17.971926 ignition[902]: Stage: kargs
Nov 5 04:53:17.972124 ignition[902]: no configs at "/usr/lib/ignition/base.d"
Nov 5 04:53:17.972135 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:53:17.973232 ignition[902]: kargs: kargs passed
Nov 5 04:53:17.973280 ignition[902]: Ignition finished successfully
Nov 5 04:53:17.979201 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 04:53:17.982148 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 04:53:18.054857 ignition[910]: Ignition 2.22.0
Nov 5 04:53:18.054871 ignition[910]: Stage: disks
Nov 5 04:53:18.055101 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Nov 5 04:53:18.055112 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:53:18.079843 ignition[910]: disks: disks passed
Nov 5 04:53:18.079983 ignition[910]: Ignition finished successfully
Nov 5 04:53:18.086081 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 04:53:18.088298 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 04:53:18.092464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 04:53:18.097589 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 04:53:18.099960 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 04:53:18.103467 systemd[1]: Reached target basic.target - Basic System.
Nov 5 04:53:18.109680 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 04:53:18.157810 systemd-fsck[920]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 5 04:53:18.227660 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 04:53:18.235207 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 04:53:18.370372 kernel: EXT4-fs (vda9): mounted filesystem d6ba737d-b2ad-4de6-9309-ffb105e40987 r/w with ordered data mode. Quota mode: none.
Nov 5 04:53:18.371364 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 04:53:18.374882 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 04:53:18.380548 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 04:53:18.383610 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 04:53:18.385491 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 5 04:53:18.385542 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 04:53:18.385576 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 04:53:18.409920 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 04:53:18.414197 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 04:53:18.421951 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (928)
Nov 5 04:53:18.421986 kernel: BTRFS info (device vda6): first mount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:53:18.422077 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 04:53:18.425993 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 04:53:18.426048 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 04:53:18.428141 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 04:53:18.479054 initrd-setup-root[952]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 04:53:18.484629 initrd-setup-root[959]: cut: /sysroot/etc/group: No such file or directory
Nov 5 04:53:18.489018 initrd-setup-root[966]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 04:53:18.494175 initrd-setup-root[973]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 04:53:18.650515 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 04:53:18.654377 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 04:53:18.657481 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 04:53:18.681417 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 04:53:18.710216 kernel: BTRFS info (device vda6): last unmount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:53:18.733555 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 04:53:18.761915 ignition[1043]: INFO : Ignition 2.22.0
Nov 5 04:53:18.761915 ignition[1043]: INFO : Stage: mount
Nov 5 04:53:18.765078 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 04:53:18.765078 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:53:18.765078 ignition[1043]: INFO : mount: mount passed
Nov 5 04:53:18.765078 ignition[1043]: INFO : Ignition finished successfully
Nov 5 04:53:18.765770 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 04:53:18.770098 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 04:53:18.800076 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 04:53:18.835350 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1054)
Nov 5 04:53:18.838412 kernel: BTRFS info (device vda6): first mount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:53:18.838428 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 04:53:18.842276 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 04:53:18.842303 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 04:53:18.843999 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 04:53:18.956813 ignition[1071]: INFO : Ignition 2.22.0
Nov 5 04:53:18.956813 ignition[1071]: INFO : Stage: files
Nov 5 04:53:18.960407 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 04:53:18.960407 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:53:18.960407 ignition[1071]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 04:53:18.960407 ignition[1071]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 04:53:18.960407 ignition[1071]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 04:53:18.972495 ignition[1071]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 04:53:18.972495 ignition[1071]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 04:53:18.972495 ignition[1071]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 04:53:18.972495 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 04:53:18.972495 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 5 04:53:18.964912 unknown[1071]: wrote ssh authorized keys file for user: core
Nov 5 04:53:19.018174 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 04:53:19.101436 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 04:53:19.104949 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 04:53:19.104949 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 04:53:19.104949 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 04:53:19.115731 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 04:53:19.115731 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 04:53:19.115731 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 04:53:19.115731 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 04:53:19.115731 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 04:53:19.115731 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 04:53:19.115731 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 04:53:19.115731 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 04:53:19.115731 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 04:53:19.115731 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 04:53:19.115731 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 5 04:53:19.560923 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 04:53:20.395812 ignition[1071]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 04:53:20.395812 ignition[1071]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 04:53:20.401751 ignition[1071]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 04:53:20.405019 ignition[1071]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 04:53:20.405019 ignition[1071]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 04:53:20.405019 ignition[1071]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 5 04:53:20.405019 ignition[1071]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 04:53:20.405019 ignition[1071]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 04:53:20.405019 ignition[1071]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 5 04:53:20.405019 ignition[1071]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 5 04:53:20.440258 ignition[1071]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 04:53:20.631370 ignition[1071]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 04:53:20.634047 ignition[1071]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 5 04:53:20.634047 ignition[1071]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 04:53:20.634047 ignition[1071]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 04:53:20.634047 ignition[1071]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 04:53:20.634047 ignition[1071]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 04:53:20.634047 ignition[1071]: INFO : files: files passed
Nov 5 04:53:20.634047 ignition[1071]: INFO : Ignition finished successfully
Nov 5 04:53:20.651839 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 04:53:20.655407 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 04:53:20.659010 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 04:53:20.682789 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 04:53:20.682917 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 04:53:20.690448 initrd-setup-root-after-ignition[1102]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 5 04:53:20.695013 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 04:53:20.695013 initrd-setup-root-after-ignition[1104]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 04:53:20.700310 initrd-setup-root-after-ignition[1108]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 04:53:20.701409 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 04:53:20.705017 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 04:53:20.709528 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 04:53:20.781049 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 04:53:20.782656 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 04:53:20.787310 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 04:53:20.790740 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 04:53:20.794641 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 04:53:20.798168 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 04:53:20.834764 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 04:53:20.836509 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 04:53:20.863835 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 04:53:20.863979 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 04:53:20.865932 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 04:53:20.871239 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 04:53:20.873049 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 04:53:20.873167 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 04:53:20.878879 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 04:53:20.882375 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 04:53:20.882954 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 04:53:20.886532 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 04:53:20.887064 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 04:53:20.894897 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 04:53:20.896655 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 04:53:20.897182 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 04:53:20.904839 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 04:53:20.906794 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 04:53:20.907315 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 04:53:20.912704 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 04:53:20.912823 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 04:53:20.918123 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 04:53:20.921430 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 04:53:20.923112 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 04:53:20.926631 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 04:53:20.927029 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 04:53:20.927148 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 04:53:20.936480 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 04:53:20.936615 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 04:53:20.938385 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 04:53:20.942989 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 04:53:20.947402 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 04:53:20.947574 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 04:53:20.953107 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 04:53:20.954605 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 04:53:20.954699 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 04:53:20.955146 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 04:53:20.955231 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 04:53:20.960040 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 04:53:20.960158 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 04:53:20.964843 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 04:53:20.964954 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 04:53:20.970915 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 04:53:20.974202 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 04:53:20.974345 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 04:53:20.978466 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 04:53:20.986347 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 04:53:20.988063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 04:53:20.992169 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 04:53:20.992293 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 04:53:20.995765 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 04:53:20.995876 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 04:53:21.006179 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 04:53:21.006334 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 04:53:21.082776 ignition[1128]: INFO : Ignition 2.22.0
Nov 5 04:53:21.082776 ignition[1128]: INFO : Stage: umount
Nov 5 04:53:21.085548 ignition[1128]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 04:53:21.085548 ignition[1128]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:53:21.089245 ignition[1128]: INFO : umount: umount passed
Nov 5 04:53:21.089245 ignition[1128]: INFO : Ignition finished successfully
Nov 5 04:53:21.090269 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 04:53:21.096100 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 04:53:21.096238 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 04:53:21.098097 systemd[1]: Stopped target network.target - Network.
Nov 5 04:53:21.102210 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 04:53:21.102270 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 04:53:21.103831 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 04:53:21.103888 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 04:53:21.104939 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 04:53:21.105004 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 04:53:21.105862 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 04:53:21.105911 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 04:53:21.112615 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 04:53:21.113146 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 04:53:21.133247 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 04:53:21.133453 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 04:53:21.139847 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 04:53:21.139988 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 04:53:21.147052 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 04:53:21.147252 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 04:53:21.147302 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 04:53:21.154567 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 04:53:21.156611 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 04:53:21.156676 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 04:53:21.158717 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 04:53:21.158772 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 04:53:21.162270 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 04:53:21.162347 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 04:53:21.165734 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 04:53:21.169998 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 04:53:21.175642 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 04:53:21.186575 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 04:53:21.186680 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 04:53:21.199688 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 04:53:21.199846 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 04:53:21.201986 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 04:53:21.202163 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 04:53:21.204989 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 04:53:21.205073 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 04:53:21.207680 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 04:53:21.207733 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 04:53:21.211213 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 04:53:21.211272 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 04:53:21.216127 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 04:53:21.216185 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 04:53:21.221892 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 04:53:21.221950 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 04:53:21.230191 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 04:53:21.233341 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 04:53:21.233403 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 04:53:21.239035 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 04:53:21.239094 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 04:53:21.240737 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 04:53:21.240789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:53:21.266066 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 04:53:21.266223 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 04:53:21.269990 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 04:53:21.275221 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 04:53:21.300274 systemd[1]: Switching root.
Nov 5 04:53:21.347341 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Nov 5 04:53:21.347405 systemd-journald[317]: Journal stopped
Nov 5 04:53:22.833464 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 04:53:22.833536 kernel: SELinux: policy capability open_perms=1
Nov 5 04:53:22.833549 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 04:53:22.833561 kernel: SELinux: policy capability always_check_network=0
Nov 5 04:53:22.833592 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 04:53:22.833604 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 04:53:22.833617 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 04:53:22.833633 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 04:53:22.833649 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 04:53:22.833661 kernel: audit: type=1403 audit(1762318401.906:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 04:53:22.833675 systemd[1]: Successfully loaded SELinux policy in 73.180ms.
Nov 5 04:53:22.833723 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.374ms.
Nov 5 04:53:22.833745 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 04:53:22.833770 systemd[1]: Detected virtualization kvm.
Nov 5 04:53:22.833799 systemd[1]: Detected architecture x86-64.
Nov 5 04:53:22.833819 systemd[1]: Detected first boot.
Nov 5 04:53:22.833853 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 04:53:22.833875 zram_generator::config[1175]: No configuration found.
Nov 5 04:53:22.833910 kernel: Guest personality initialized and is inactive
Nov 5 04:53:22.833937 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 04:53:22.833949 kernel: Initialized host personality
Nov 5 04:53:22.833967 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 04:53:22.833983 systemd[1]: Populated /etc with preset unit settings.
Nov 5 04:53:22.833996 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 04:53:22.834016 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 04:53:22.834029 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 04:53:22.834043 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 04:53:22.834056 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 04:53:22.834074 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 04:53:22.834086 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 04:53:22.834099 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 04:53:22.834122 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 04:53:22.834136 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 04:53:22.834148 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 04:53:22.834161 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 04:53:22.834175 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 04:53:22.834188 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 04:53:22.834201 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 04:53:22.834221 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 04:53:22.834235 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 04:53:22.834248 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 04:53:22.834261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 04:53:22.834274 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 04:53:22.834286 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 04:53:22.834306 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 04:53:22.834338 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 04:53:22.834351 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 04:53:22.834364 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 04:53:22.834377 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 04:53:22.834390 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 04:53:22.834403 systemd[1]: Reached target swap.target - Swaps.
Nov 5 04:53:22.834416 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 04:53:22.834437 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 04:53:22.834453 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 04:53:22.834465 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 04:53:22.834479 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 04:53:22.834492 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 04:53:22.834505 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 04:53:22.834519 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 04:53:22.834567 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 04:53:22.834582 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 04:53:22.834596 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:53:22.834610 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 04:53:22.834623 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 04:53:22.834636 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 04:53:22.834654 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 04:53:22.834681 systemd[1]: Reached target machines.target - Containers.
Nov 5 04:53:22.834694 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 04:53:22.834707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 04:53:22.834723 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 04:53:22.834736 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 04:53:22.834749 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 04:53:22.834769 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 04:53:22.834782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 04:53:22.834795 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 04:53:22.834808 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 04:53:22.834821 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 04:53:22.834835 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 04:53:22.834848 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 04:53:22.834869 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 04:53:22.834884 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 04:53:22.834898 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 04:53:22.834922 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 04:53:22.834936 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 04:53:22.834948 kernel: ACPI: bus type drm_connector registered
Nov 5 04:53:22.834961 kernel: fuse: init (API version 7.41)
Nov 5 04:53:22.834982 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 04:53:22.834995 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 04:53:22.835010 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 04:53:22.835026 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 04:53:22.835051 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:53:22.835064 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 04:53:22.835078 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 04:53:22.835113 systemd-journald[1259]: Collecting audit messages is disabled.
Nov 5 04:53:22.835142 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 04:53:22.835164 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 04:53:22.835177 systemd-journald[1259]: Journal started
Nov 5 04:53:22.835199 systemd-journald[1259]: Runtime Journal (/run/log/journal/685e163b44554e5b8cbf07e190e1d71f) is 6M, max 48.1M, 42M free.
Nov 5 04:53:22.472674 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 04:53:22.486362 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 5 04:53:22.486914 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 04:53:22.839525 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 04:53:22.841792 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 04:53:22.843835 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 04:53:22.845829 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 04:53:22.848264 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 04:53:22.850681 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 04:53:22.850926 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 04:53:22.853435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 04:53:22.853745 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 04:53:22.855871 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 04:53:22.856099 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 04:53:22.858230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 04:53:22.858535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 04:53:22.860973 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 04:53:22.861193 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 04:53:22.863438 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 04:53:22.863664 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 04:53:22.865835 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 04:53:22.868116 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 04:53:22.871229 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 04:53:22.874015 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 04:53:22.891316 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 04:53:22.893971 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 04:53:22.897619 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 04:53:22.900785 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 04:53:22.902747 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 04:53:22.902779 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 04:53:22.905447 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 04:53:22.908432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 04:53:22.910662 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 04:53:22.914271 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 04:53:22.917087 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 04:53:22.918804 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 04:53:22.921213 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 04:53:22.923063 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 04:53:22.925032 systemd-journald[1259]: Time spent on flushing to /var/log/journal/685e163b44554e5b8cbf07e190e1d71f is 19.398ms for 1054 entries.
Nov 5 04:53:22.925032 systemd-journald[1259]: System Journal (/var/log/journal/685e163b44554e5b8cbf07e190e1d71f) is 8M, max 163.5M, 155.5M free.
Nov 5 04:53:22.954730 systemd-journald[1259]: Received client request to flush runtime journal.
Nov 5 04:53:22.930607 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 04:53:22.957700 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 04:53:22.961128 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 04:53:22.964571 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 04:53:22.966755 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 04:53:22.969273 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 04:53:22.973685 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 04:53:22.977118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 04:53:22.985117 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 04:53:22.992828 kernel: loop1: detected capacity change from 0 to 119080
Nov 5 04:53:22.991221 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 04:53:23.014021 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 04:53:23.019375 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 04:53:23.025044 kernel: loop2: detected capacity change from 0 to 111544
Nov 5 04:53:23.025286 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 04:53:23.073896 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 04:53:23.087997 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 04:53:23.097409 kernel: loop3: detected capacity change from 0 to 229808
Nov 5 04:53:23.099714 systemd-tmpfiles[1310]: ACLs are not supported, ignoring.
Nov 5 04:53:23.099745 systemd-tmpfiles[1310]: ACLs are not supported, ignoring.
Nov 5 04:53:23.105975 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 04:53:23.124380 kernel: loop4: detected capacity change from 0 to 119080
Nov 5 04:53:23.137359 kernel: loop5: detected capacity change from 0 to 111544
Nov 5 04:53:23.140733 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 04:53:23.153352 kernel: loop6: detected capacity change from 0 to 229808
Nov 5 04:53:23.162714 (sd-merge)[1317]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 5 04:53:23.168364 (sd-merge)[1317]: Merged extensions into '/usr'.
Nov 5 04:53:23.174498 systemd[1]: Reload requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 04:53:23.174517 systemd[1]: Reloading...
Nov 5 04:53:23.261348 zram_generator::config[1351]: No configuration found.
Nov 5 04:53:23.269183 systemd-resolved[1309]: Positive Trust Anchors:
Nov 5 04:53:23.269219 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 04:53:23.269227 systemd-resolved[1309]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 04:53:23.269280 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 04:53:23.281061 systemd-resolved[1309]: Defaulting to hostname 'linux'.
Nov 5 04:53:23.490510 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 04:53:23.491075 systemd[1]: Reloading finished in 316 ms.
Nov 5 04:53:23.527808 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 04:53:23.530095 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 04:53:23.535183 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 04:53:23.605403 systemd[1]: Starting ensure-sysext.service...
Nov 5 04:53:23.608231 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 04:53:23.627242 systemd[1]: Reload requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)...
Nov 5 04:53:23.627261 systemd[1]: Reloading...
Nov 5 04:53:23.631534 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 04:53:23.631568 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 04:53:23.631884 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 04:53:23.632190 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 04:53:23.633217 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 04:53:23.633514 systemd-tmpfiles[1388]: ACLs are not supported, ignoring.
Nov 5 04:53:23.633587 systemd-tmpfiles[1388]: ACLs are not supported, ignoring.
Nov 5 04:53:23.639202 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 04:53:23.639216 systemd-tmpfiles[1388]: Skipping /boot
Nov 5 04:53:23.650653 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 04:53:23.650667 systemd-tmpfiles[1388]: Skipping /boot
Nov 5 04:53:23.683367 zram_generator::config[1418]: No configuration found.
Nov 5 04:53:23.879239 systemd[1]: Reloading finished in 251 ms.
Nov 5 04:53:23.899615 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 04:53:23.926933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 04:53:23.938838 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 04:53:23.942554 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 04:53:23.952058 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 04:53:23.955389 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 04:53:23.960590 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 04:53:23.963780 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 04:53:23.968287 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:53:23.969124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 04:53:23.972192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 04:53:23.984726 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 04:53:23.989700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 04:53:23.993556 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 04:53:23.993760 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 04:53:23.993921 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:53:23.995901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 04:53:24.002721 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 04:53:24.005741 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 04:53:24.006793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 04:53:24.020785 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 04:53:24.030077 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 04:53:24.030351 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 04:53:24.036633 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 04:53:24.041585 systemd-udevd[1462]: Using default interface naming scheme 'v257'.
Nov 5 04:53:24.045742 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:53:24.046231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 04:53:24.048368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 04:53:24.050561 augenrules[1491]: No rules
Nov 5 04:53:24.051603 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 04:53:24.073166 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 04:53:24.074973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 04:53:24.075097 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 04:53:24.075202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:53:24.076440 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 04:53:24.076726 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 04:53:24.081357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 04:53:24.081597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 04:53:24.084566 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 04:53:24.084847 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 04:53:24.089520 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 04:53:24.093606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 04:53:24.093842 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 04:53:24.099362 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 04:53:24.116277 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:53:24.117676 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 04:53:24.119537 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 04:53:24.123451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 04:53:24.134478 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 04:53:24.138715 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 04:53:24.145556 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 04:53:24.147447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 04:53:24.147498 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 04:53:24.150856 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 04:53:24.152711 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 04:53:24.152749 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:53:24.153580 systemd[1]: Finished ensure-sysext.service.
Nov 5 04:53:24.155478 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 04:53:24.155695 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 04:53:24.156500 augenrules[1517]: /sbin/augenrules: No change
Nov 5 04:53:24.164014 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 04:53:24.164633 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 04:53:24.309130 augenrules[1550]: No rules
Nov 5 04:53:24.186923 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 04:53:24.312575 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 04:53:24.315396 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 04:53:24.336870 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 04:53:24.337157 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 04:53:24.337578 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 04:53:24.373994 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 04:53:24.379431 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 04:53:24.383514 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 04:53:24.398583 kernel: mousedev: PS/2 mouse device common for all mice
Nov 5 04:53:24.405499 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 5 04:53:24.503622 systemd-networkd[1531]: lo: Link UP
Nov 5 04:53:24.503636 systemd-networkd[1531]: lo: Gained carrier
Nov 5 04:53:24.505965 systemd-networkd[1531]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 04:53:24.505978 systemd-networkd[1531]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 04:53:24.507606 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 04:53:24.509996 systemd-networkd[1531]: eth0: Link UP
Nov 5 04:53:24.510047 systemd[1]: Reached target network.target - Network.
Nov 5 04:53:24.510222 systemd-networkd[1531]: eth0: Gained carrier
Nov 5 04:53:24.510241 systemd-networkd[1531]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 04:53:24.513053 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 04:53:24.516383 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 04:53:24.522554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 04:53:24.531492 systemd-networkd[1531]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 04:53:24.533432 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 04:53:24.550489 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 04:53:25.827006 systemd-resolved[1309]: Clock change detected. Flushing caches.
Nov 5 04:53:25.830880 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Nov 5 04:53:25.836946 kernel: ACPI: button: Power Button [PWRF]
Nov 5 04:53:25.830634 systemd-timesyncd[1547]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 5 04:53:25.830700 systemd-timesyncd[1547]: Initial clock synchronization to Wed 2025-11-05 04:53:25.826933 UTC.
Nov 5 04:53:25.835706 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 04:53:25.845756 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 04:53:25.857687 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 04:53:25.918592 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Nov 5 04:53:25.919299 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 5 04:53:25.919965 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 5 04:53:26.019485 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:53:26.081713 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 04:53:26.082760 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:53:26.126242 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:53:26.236455 kernel: kvm_amd: TSC scaling supported
Nov 5 04:53:26.236644 kernel: kvm_amd: Nested Virtualization enabled
Nov 5 04:53:26.236661 kernel: kvm_amd: Nested Paging enabled
Nov 5 04:53:26.237256 kernel: kvm_amd: LBR virtualization supported
Nov 5 04:53:26.239124 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 5 04:53:26.239160 kernel: kvm_amd: Virtual GIF supported
Nov 5 04:53:26.260101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:53:26.278911 kernel: EDAC MC: Ver: 3.0.0
Nov 5 04:53:26.281397 ldconfig[1459]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 04:53:26.288465 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 04:53:26.292306 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 04:53:26.328067 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 04:53:26.330194 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 04:53:26.332069 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 5 04:53:26.334065 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 5 04:53:26.336079 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 5 04:53:26.338106 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 5 04:53:26.339962 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 5 04:53:26.341976 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 5 04:53:26.343990 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 5 04:53:26.344031 systemd[1]: Reached target paths.target - Path Units.
Nov 5 04:53:26.345522 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 04:53:26.348097 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 5 04:53:26.351527 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 5 04:53:26.355278 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 5 04:53:26.357574 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 5 04:53:26.359623 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 5 04:53:26.363784 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 5 04:53:26.365789 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 5 04:53:26.368499 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 5 04:53:26.371429 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 04:53:26.373219 systemd[1]: Reached target basic.target - Basic System.
Nov 5 04:53:26.374785 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 5 04:53:26.374817 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 5 04:53:26.376157 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 5 04:53:26.379401 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 5 04:53:26.390381 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 5 04:53:26.393831 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 5 04:53:26.397289 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 5 04:53:26.399269 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 5 04:53:26.411202 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 5 04:53:26.414942 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 5 04:53:26.417993 jq[1613]: false
Nov 5 04:53:26.418533 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 5 04:53:26.423140 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 5 04:53:26.425038 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Refreshing passwd entry cache
Nov 5 04:53:26.423121 oslogin_cache_refresh[1615]: Refreshing passwd entry cache
Nov 5 04:53:26.426724 extend-filesystems[1614]: Found /dev/vda6
Nov 5 04:53:26.429907 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 5 04:53:26.432657 extend-filesystems[1614]: Found /dev/vda9
Nov 5 04:53:26.434456 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Failure getting users, quitting
Nov 5 04:53:26.434456 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 04:53:26.434456 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Refreshing group entry cache
Nov 5 04:53:26.434080 oslogin_cache_refresh[1615]: Failure getting users, quitting
Nov 5 04:53:26.434105 oslogin_cache_refresh[1615]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 04:53:26.434181 oslogin_cache_refresh[1615]: Refreshing group entry cache
Nov 5 04:53:26.439160 extend-filesystems[1614]: Checking size of /dev/vda9
Nov 5 04:53:26.441769 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 5 04:53:26.443817 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 5 04:53:26.444607 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 5 04:53:26.447139 systemd[1]: Starting update-engine.service - Update Engine...
Nov 5 04:53:26.451180 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Failure getting groups, quitting
Nov 5 04:53:26.451251 oslogin_cache_refresh[1615]: Failure getting groups, quitting
Nov 5 04:53:26.451330 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 04:53:26.451401 oslogin_cache_refresh[1615]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 04:53:26.451679 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 5 04:53:26.453925 extend-filesystems[1614]: Resized partition /dev/vda9
Nov 5 04:53:26.461105 extend-filesystems[1640]: resize2fs 1.47.3 (8-Jul-2025)
Nov 5 04:53:26.463577 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 5 04:53:26.466898 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 5 04:53:26.467202 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 5 04:53:26.503482 jq[1638]: true
Nov 5 04:53:26.467945 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 5 04:53:26.468375 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 5 04:53:26.471041 systemd[1]: motdgen.service: Deactivated successfully.
Nov 5 04:53:26.471541 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 5 04:53:26.475741 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 5 04:53:26.476270 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 5 04:53:26.507626 jq[1643]: true
Nov 5 04:53:26.529918 tar[1642]: linux-amd64/LICENSE
Nov 5 04:53:26.529918 tar[1642]: linux-amd64/helm
Nov 5 04:53:26.537586 update_engine[1634]: I20251105 04:53:26.537459 1634 main.cc:92] Flatcar Update Engine starting
Nov 5 04:53:26.629280 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 5 04:53:26.641926 systemd-logind[1631]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 5 04:53:26.641961 systemd-logind[1631]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 5 04:53:26.642378 systemd-logind[1631]: New seat seat0.
Nov 5 04:53:26.647758 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 5 04:53:26.756832 dbus-daemon[1611]: [system] SELinux support is enabled
Nov 5 04:53:26.767361 update_engine[1634]: I20251105 04:53:26.766340 1634 update_check_scheduler.cc:74] Next update check in 6m58s
Nov 5 04:53:26.774819 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 5 04:53:26.778952 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 5 04:53:26.778984 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 5 04:53:26.780583 dbus-daemon[1611]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 5 04:53:26.781149 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 5 04:53:26.781229 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 5 04:53:26.783311 systemd[1]: Started update-engine.service - Update Engine.
Nov 5 04:53:26.787632 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 5 04:53:26.819689 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 5 04:53:26.854574 extend-filesystems[1640]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 5 04:53:26.854574 extend-filesystems[1640]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 5 04:53:26.854574 extend-filesystems[1640]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 5 04:53:26.866025 extend-filesystems[1614]: Resized filesystem in /dev/vda9
Nov 5 04:53:26.856287 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 5 04:53:26.857232 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 5 04:53:26.879184 locksmithd[1679]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 5 04:53:26.889652 bash[1678]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 04:53:26.891705 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 5 04:53:26.895340 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 5 04:53:26.979620 sshd_keygen[1639]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 5 04:53:27.012619 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 5 04:53:27.043304 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 5 04:53:27.076134 systemd[1]: issuegen.service: Deactivated successfully.
Nov 5 04:53:27.076479 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 5 04:53:27.081413 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 5 04:53:27.107140 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 5 04:53:27.111998 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 5 04:53:27.117135 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 5 04:53:27.119185 systemd[1]: Reached target getty.target - Login Prompts.
Nov 5 04:53:27.171129 containerd[1646]: time="2025-11-05T04:53:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 5 04:53:27.171823 containerd[1646]: time="2025-11-05T04:53:27.171785951Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4
Nov 5 04:53:27.186988 containerd[1646]: time="2025-11-05T04:53:27.186916791Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.984µs"
Nov 5 04:53:27.186988 containerd[1646]: time="2025-11-05T04:53:27.186961375Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 5 04:53:27.187142 containerd[1646]: time="2025-11-05T04:53:27.187014054Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 5 04:53:27.187142 containerd[1646]: time="2025-11-05T04:53:27.187027058Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 5 04:53:27.187262 containerd[1646]: time="2025-11-05T04:53:27.187236060Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 5 04:53:27.187262 containerd[1646]: time="2025-11-05T04:53:27.187255757Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 04:53:27.187352 containerd[1646]: time="2025-11-05T04:53:27.187326670Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 04:53:27.187352 containerd[1646]: time="2025-11-05T04:53:27.187341427Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 04:53:27.187657 containerd[1646]: time="2025-11-05T04:53:27.187624759Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 04:53:27.187657 containerd[1646]: time="2025-11-05T04:53:27.187645447Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 04:53:27.187657 containerd[1646]: time="2025-11-05T04:53:27.187656308Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 04:53:27.187736 containerd[1646]: time="2025-11-05T04:53:27.187664834Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 5 04:53:27.187918 containerd[1646]: time="2025-11-05T04:53:27.187876220Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 5 04:53:27.187918 containerd[1646]: time="2025-11-05T04:53:27.187909683Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 5 04:53:27.188077 containerd[1646]: time="2025-11-05T04:53:27.188049896Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 5 04:53:27.188337 containerd[1646]: time="2025-11-05T04:53:27.188297640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 04:53:27.188375 containerd[1646]: time="2025-11-05T04:53:27.188337455Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 04:53:27.188375 containerd[1646]: time="2025-11-05T04:53:27.188348105Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 5 04:53:27.188413 containerd[1646]: time="2025-11-05T04:53:27.188395183Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 5 04:53:27.189061 containerd[1646]: time="2025-11-05T04:53:27.189013603Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 04:53:27.189200 containerd[1646]: time="2025-11-05T04:53:27.189102219Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 04:53:27.195283 tar[1642]: linux-amd64/README.md
Nov 5 04:53:27.196661 containerd[1646]: time="2025-11-05T04:53:27.196602923Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 04:53:27.196771 containerd[1646]: time="2025-11-05T04:53:27.196678625Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 5 04:53:27.196941 containerd[1646]: time="2025-11-05T04:53:27.196912233Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 5 04:53:27.196941 containerd[1646]: time="2025-11-05T04:53:27.196937771Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 04:53:27.196987 containerd[1646]: time="2025-11-05T04:53:27.196956175Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 04:53:27.196987 containerd[1646]: time="2025-11-05T04:53:27.196972015Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 04:53:27.196987 containerd[1646]: time="2025-11-05T04:53:27.196984929Z"
level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 04:53:27.197064 containerd[1646]: time="2025-11-05T04:53:27.196996631Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 04:53:27.197064 containerd[1646]: time="2025-11-05T04:53:27.197012871Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 04:53:27.197064 containerd[1646]: time="2025-11-05T04:53:27.197028411Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 04:53:27.197064 containerd[1646]: time="2025-11-05T04:53:27.197041495Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 04:53:27.197064 containerd[1646]: time="2025-11-05T04:53:27.197055251Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 04:53:27.197148 containerd[1646]: time="2025-11-05T04:53:27.197067694Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 04:53:27.197148 containerd[1646]: time="2025-11-05T04:53:27.197087612Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 04:53:27.197275 containerd[1646]: time="2025-11-05T04:53:27.197249495Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 04:53:27.197297 containerd[1646]: time="2025-11-05T04:53:27.197284982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 04:53:27.197342 containerd[1646]: time="2025-11-05T04:53:27.197321580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 04:53:27.197388 containerd[1646]: time="2025-11-05T04:53:27.197348811Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 04:53:27.197388 containerd[1646]: time="2025-11-05T04:53:27.197373427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 04:53:27.197426 containerd[1646]: time="2025-11-05T04:53:27.197389247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 04:53:27.197447 containerd[1646]: time="2025-11-05T04:53:27.197422399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 04:53:27.197447 containerd[1646]: time="2025-11-05T04:53:27.197442257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 04:53:27.197483 containerd[1646]: time="2025-11-05T04:53:27.197455962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 04:53:27.197483 containerd[1646]: time="2025-11-05T04:53:27.197469498Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 04:53:27.197527 containerd[1646]: time="2025-11-05T04:53:27.197482402Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 04:53:27.197527 containerd[1646]: time="2025-11-05T04:53:27.197515844Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 04:53:27.197619 containerd[1646]: time="2025-11-05T04:53:27.197595103Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 04:53:27.197642 containerd[1646]: time="2025-11-05T04:53:27.197619218Z" level=info msg="Start snapshots syncer" Nov 5 04:53:27.197683 containerd[1646]: time="2025-11-05T04:53:27.197662830Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 04:53:27.198036 
containerd[1646]: time="2025-11-05T04:53:27.197988050Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 04:53:27.198174 containerd[1646]: time="2025-11-05T04:53:27.198115860Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Nov 5 04:53:27.198255 containerd[1646]: time="2025-11-05T04:53:27.198230444Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 04:53:27.198418 containerd[1646]: time="2025-11-05T04:53:27.198392308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 04:53:27.198450 containerd[1646]: time="2025-11-05T04:53:27.198427313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 04:53:27.198450 containerd[1646]: time="2025-11-05T04:53:27.198444025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 04:53:27.198489 containerd[1646]: time="2025-11-05T04:53:27.198478219Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 04:53:27.198509 containerd[1646]: time="2025-11-05T04:53:27.198499078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 04:53:27.198529 containerd[1646]: time="2025-11-05T04:53:27.198517583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 04:53:27.198549 containerd[1646]: time="2025-11-05T04:53:27.198532781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 04:53:27.198582 containerd[1646]: time="2025-11-05T04:53:27.198548431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 04:53:27.198582 containerd[1646]: time="2025-11-05T04:53:27.198565322Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 04:53:27.198625 containerd[1646]: time="2025-11-05T04:53:27.198602241Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp 
type=io.containerd.tracing.processor.v1 Nov 5 04:53:27.198648 containerd[1646]: time="2025-11-05T04:53:27.198620987Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 04:53:27.198648 containerd[1646]: time="2025-11-05T04:53:27.198634712Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 04:53:27.198706 containerd[1646]: time="2025-11-05T04:53:27.198648107Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 04:53:27.198706 containerd[1646]: time="2025-11-05T04:53:27.198662855Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 04:53:27.198706 containerd[1646]: time="2025-11-05T04:53:27.198677172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 04:53:27.198706 containerd[1646]: time="2025-11-05T04:53:27.198692140Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 04:53:27.198791 containerd[1646]: time="2025-11-05T04:53:27.198722397Z" level=info msg="runtime interface created" Nov 5 04:53:27.198791 containerd[1646]: time="2025-11-05T04:53:27.198733097Z" level=info msg="created NRI interface" Nov 5 04:53:27.198791 containerd[1646]: time="2025-11-05T04:53:27.198744067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 04:53:27.198791 containerd[1646]: time="2025-11-05T04:53:27.198758585Z" level=info msg="Connect containerd service" Nov 5 04:53:27.198956 containerd[1646]: time="2025-11-05T04:53:27.198818337Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 04:53:27.200191 containerd[1646]: time="2025-11-05T04:53:27.200151757Z" level=error 
msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 04:53:27.226181 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 04:53:27.392265 containerd[1646]: time="2025-11-05T04:53:27.392091474Z" level=info msg="Start subscribing containerd event" Nov 5 04:53:27.392761 containerd[1646]: time="2025-11-05T04:53:27.392658206Z" level=info msg="Start recovering state" Nov 5 04:53:27.392966 containerd[1646]: time="2025-11-05T04:53:27.392802036Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 04:53:27.392966 containerd[1646]: time="2025-11-05T04:53:27.392910299Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 04:53:27.392966 containerd[1646]: time="2025-11-05T04:53:27.392946106Z" level=info msg="Start event monitor" Nov 5 04:53:27.393383 containerd[1646]: time="2025-11-05T04:53:27.392979318Z" level=info msg="Start cni network conf syncer for default" Nov 5 04:53:27.393383 containerd[1646]: time="2025-11-05T04:53:27.393017540Z" level=info msg="Start streaming server" Nov 5 04:53:27.393383 containerd[1646]: time="2025-11-05T04:53:27.393042317Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 04:53:27.393383 containerd[1646]: time="2025-11-05T04:53:27.393053528Z" level=info msg="runtime interface starting up..." Nov 5 04:53:27.393383 containerd[1646]: time="2025-11-05T04:53:27.393065250Z" level=info msg="starting plugins..." Nov 5 04:53:27.393383 containerd[1646]: time="2025-11-05T04:53:27.393098903Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 04:53:27.393618 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 5 04:53:27.393922 containerd[1646]: time="2025-11-05T04:53:27.393805848Z" level=info msg="containerd successfully booted in 0.223476s" Nov 5 04:53:27.614155 systemd-networkd[1531]: eth0: Gained IPv6LL Nov 5 04:53:27.618213 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 04:53:27.620965 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 04:53:27.624322 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 5 04:53:27.627711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:53:27.647484 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 04:53:27.668277 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 5 04:53:27.668650 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 5 04:53:27.671260 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 04:53:27.676572 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 04:53:28.336666 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 04:53:28.340366 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:45050.service - OpenSSH per-connection server daemon (10.0.0.1:45050). Nov 5 04:53:28.433942 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 45050 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:53:28.450089 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:53:28.458725 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 04:53:28.462147 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 04:53:28.471362 systemd-logind[1631]: New session 1 of user core. Nov 5 04:53:28.489288 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Nov 5 04:53:28.496280 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 04:53:28.522303 (systemd)[1752]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 04:53:28.525466 systemd-logind[1631]: New session c1 of user core. Nov 5 04:53:28.726503 systemd[1752]: Queued start job for default target default.target. Nov 5 04:53:28.744649 systemd[1752]: Created slice app.slice - User Application Slice. Nov 5 04:53:28.744708 systemd[1752]: Reached target paths.target - Paths. Nov 5 04:53:28.744813 systemd[1752]: Reached target timers.target - Timers. Nov 5 04:53:28.746986 systemd[1752]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 04:53:28.768817 systemd[1752]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 04:53:28.768976 systemd[1752]: Reached target sockets.target - Sockets. Nov 5 04:53:28.769019 systemd[1752]: Reached target basic.target - Basic System. Nov 5 04:53:28.769066 systemd[1752]: Reached target default.target - Main User Target. Nov 5 04:53:28.769101 systemd[1752]: Startup finished in 232ms. Nov 5 04:53:28.769585 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 04:53:28.809797 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 04:53:28.834304 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:45058.service - OpenSSH per-connection server daemon (10.0.0.1:45058). Nov 5 04:53:28.909126 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 45058 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:53:28.911170 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:53:28.916472 systemd-logind[1631]: New session 2 of user core. Nov 5 04:53:28.926156 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 5 04:53:28.945538 sshd[1766]: Connection closed by 10.0.0.1 port 45058 Nov 5 04:53:28.946184 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Nov 5 04:53:29.068033 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:45058.service: Deactivated successfully. Nov 5 04:53:29.070082 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 04:53:29.070827 systemd-logind[1631]: Session 2 logged out. Waiting for processes to exit. Nov 5 04:53:29.074398 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:45070.service - OpenSSH per-connection server daemon (10.0.0.1:45070). Nov 5 04:53:29.077526 systemd-logind[1631]: Removed session 2. Nov 5 04:53:29.134748 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 45070 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:53:29.136029 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:53:29.140720 systemd-logind[1631]: New session 3 of user core. Nov 5 04:53:29.150030 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 04:53:29.165745 sshd[1775]: Connection closed by 10.0.0.1 port 45070 Nov 5 04:53:29.166083 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Nov 5 04:53:29.170543 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:45070.service: Deactivated successfully. Nov 5 04:53:29.172605 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 04:53:29.174161 systemd-logind[1631]: Session 3 logged out. Waiting for processes to exit. Nov 5 04:53:29.175473 systemd-logind[1631]: Removed session 3. Nov 5 04:53:29.226129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:53:29.236801 systemd[1]: Reached target multi-user.target - Multi-User System. 
Nov 5 04:53:29.237695 (kubelet)[1785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 04:53:29.238793 systemd[1]: Startup finished in 3.487s (kernel) + 7.022s (initrd) + 6.127s (userspace) = 16.637s. Nov 5 04:53:29.871078 kubelet[1785]: E1105 04:53:29.871001 1785 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 04:53:29.875764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 04:53:29.876049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 04:53:29.876482 systemd[1]: kubelet.service: Consumed 2.009s CPU time, 268M memory peak. Nov 5 04:53:39.182944 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:59400.service - OpenSSH per-connection server daemon (10.0.0.1:59400). Nov 5 04:53:39.236488 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 59400 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:53:39.237927 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:53:39.242610 systemd-logind[1631]: New session 4 of user core. Nov 5 04:53:39.256000 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 04:53:39.270739 sshd[1801]: Connection closed by 10.0.0.1 port 59400 Nov 5 04:53:39.271080 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Nov 5 04:53:39.287065 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:59400.service: Deactivated successfully. Nov 5 04:53:39.289187 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 04:53:39.289923 systemd-logind[1631]: Session 4 logged out. Waiting for processes to exit. 
Nov 5 04:53:39.293053 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:59408.service - OpenSSH per-connection server daemon (10.0.0.1:59408). Nov 5 04:53:39.293580 systemd-logind[1631]: Removed session 4. Nov 5 04:53:39.354795 sshd[1807]: Accepted publickey for core from 10.0.0.1 port 59408 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:53:39.356587 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:53:39.361604 systemd-logind[1631]: New session 5 of user core. Nov 5 04:53:39.375995 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 04:53:39.384965 sshd[1810]: Connection closed by 10.0.0.1 port 59408 Nov 5 04:53:39.385273 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Nov 5 04:53:39.403501 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:59408.service: Deactivated successfully. Nov 5 04:53:39.405422 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 04:53:39.406198 systemd-logind[1631]: Session 5 logged out. Waiting for processes to exit. Nov 5 04:53:39.408902 systemd[1]: Started sshd@5-10.0.0.99:22-10.0.0.1:59416.service - OpenSSH per-connection server daemon (10.0.0.1:59416). Nov 5 04:53:39.409437 systemd-logind[1631]: Removed session 5. Nov 5 04:53:39.465112 sshd[1816]: Accepted publickey for core from 10.0.0.1 port 59416 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:53:39.466350 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:53:39.470806 systemd-logind[1631]: New session 6 of user core. Nov 5 04:53:39.479987 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 04:53:39.493819 sshd[1820]: Connection closed by 10.0.0.1 port 59416 Nov 5 04:53:39.494273 sshd-session[1816]: pam_unix(sshd:session): session closed for user core Nov 5 04:53:39.502361 systemd[1]: sshd@5-10.0.0.99:22-10.0.0.1:59416.service: Deactivated successfully. 
Nov 5 04:53:39.504338 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 04:53:39.505244 systemd-logind[1631]: Session 6 logged out. Waiting for processes to exit. Nov 5 04:53:39.508253 systemd[1]: Started sshd@6-10.0.0.99:22-10.0.0.1:59420.service - OpenSSH per-connection server daemon (10.0.0.1:59420). Nov 5 04:53:39.508838 systemd-logind[1631]: Removed session 6. Nov 5 04:53:39.576179 sshd[1826]: Accepted publickey for core from 10.0.0.1 port 59420 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:53:39.577562 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:53:39.582119 systemd-logind[1631]: New session 7 of user core. Nov 5 04:53:39.591992 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 04:53:39.614247 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 04:53:39.614569 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 04:53:39.635686 sudo[1830]: pam_unix(sudo:session): session closed for user root Nov 5 04:53:39.637931 sshd[1829]: Connection closed by 10.0.0.1 port 59420 Nov 5 04:53:39.638408 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Nov 5 04:53:39.661516 systemd[1]: sshd@6-10.0.0.99:22-10.0.0.1:59420.service: Deactivated successfully. Nov 5 04:53:39.663439 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 04:53:39.664208 systemd-logind[1631]: Session 7 logged out. Waiting for processes to exit. Nov 5 04:53:39.667193 systemd[1]: Started sshd@7-10.0.0.99:22-10.0.0.1:59428.service - OpenSSH per-connection server daemon (10.0.0.1:59428). Nov 5 04:53:39.667734 systemd-logind[1631]: Removed session 7. 
Nov 5 04:53:39.724033 sshd[1836]: Accepted publickey for core from 10.0.0.1 port 59428 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:53:39.725256 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:53:39.729713 systemd-logind[1631]: New session 8 of user core. Nov 5 04:53:39.744087 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 04:53:39.758715 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 04:53:39.759067 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 04:53:39.765581 sudo[1842]: pam_unix(sudo:session): session closed for user root Nov 5 04:53:39.773840 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 04:53:39.774191 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 04:53:39.785474 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 04:53:39.829983 augenrules[1864]: No rules Nov 5 04:53:39.831700 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 04:53:39.832112 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 04:53:39.833447 sudo[1841]: pam_unix(sudo:session): session closed for user root Nov 5 04:53:39.835385 sshd[1840]: Connection closed by 10.0.0.1 port 59428 Nov 5 04:53:39.835656 sshd-session[1836]: pam_unix(sshd:session): session closed for user core Nov 5 04:53:39.844301 systemd[1]: sshd@7-10.0.0.99:22-10.0.0.1:59428.service: Deactivated successfully. Nov 5 04:53:39.845981 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 04:53:39.846681 systemd-logind[1631]: Session 8 logged out. Waiting for processes to exit. Nov 5 04:53:39.849303 systemd[1]: Started sshd@8-10.0.0.99:22-10.0.0.1:59430.service - OpenSSH per-connection server daemon (10.0.0.1:59430). 
Nov 5 04:53:39.849825 systemd-logind[1631]: Removed session 8. Nov 5 04:53:39.891733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 04:53:39.893380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:53:39.903051 sshd[1873]: Accepted publickey for core from 10.0.0.1 port 59430 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:53:39.904472 sshd-session[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:53:39.908964 systemd-logind[1631]: New session 9 of user core. Nov 5 04:53:39.923038 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 04:53:39.938063 sudo[1880]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 04:53:39.938460 sudo[1880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 04:53:40.178907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:53:40.183589 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 04:53:40.282665 kubelet[1899]: E1105 04:53:40.282588 1899 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 04:53:40.289560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 04:53:40.289767 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 04:53:40.290406 systemd[1]: kubelet.service: Consumed 357ms CPU time, 110.2M memory peak. Nov 5 04:53:40.932223 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 5 04:53:40.966337 (dockerd)[1915]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 04:53:41.598196 dockerd[1915]: time="2025-11-05T04:53:41.598106608Z" level=info msg="Starting up" Nov 5 04:53:41.600198 dockerd[1915]: time="2025-11-05T04:53:41.600164076Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 04:53:41.626895 dockerd[1915]: time="2025-11-05T04:53:41.626825045Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 04:53:41.753369 dockerd[1915]: time="2025-11-05T04:53:41.753281341Z" level=info msg="Loading containers: start." Nov 5 04:53:41.767685 kernel: Initializing XFRM netlink socket Nov 5 04:53:42.075711 systemd-networkd[1531]: docker0: Link UP Nov 5 04:53:42.398033 dockerd[1915]: time="2025-11-05T04:53:42.397798831Z" level=info msg="Loading containers: done." Nov 5 04:53:42.414986 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1811076859-merged.mount: Deactivated successfully. 
Nov 5 04:53:42.416393 dockerd[1915]: time="2025-11-05T04:53:42.416334566Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 04:53:42.416465 dockerd[1915]: time="2025-11-05T04:53:42.416450503Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 04:53:42.416576 dockerd[1915]: time="2025-11-05T04:53:42.416558265Z" level=info msg="Initializing buildkit" Nov 5 04:53:42.449290 dockerd[1915]: time="2025-11-05T04:53:42.449235966Z" level=info msg="Completed buildkit initialization" Nov 5 04:53:42.453659 dockerd[1915]: time="2025-11-05T04:53:42.453623132Z" level=info msg="Daemon has completed initialization" Nov 5 04:53:42.453761 dockerd[1915]: time="2025-11-05T04:53:42.453705586Z" level=info msg="API listen on /run/docker.sock" Nov 5 04:53:42.453940 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 04:53:43.474261 containerd[1646]: time="2025-11-05T04:53:43.474181578Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 5 04:53:44.172956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867197271.mount: Deactivated successfully. 
Nov 5 04:53:46.417321 containerd[1646]: time="2025-11-05T04:53:46.417232829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:46.419680 containerd[1646]: time="2025-11-05T04:53:46.419255411Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=28442726" Nov 5 04:53:46.422366 containerd[1646]: time="2025-11-05T04:53:46.422306021Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:46.425518 containerd[1646]: time="2025-11-05T04:53:46.425469633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:46.426591 containerd[1646]: time="2025-11-05T04:53:46.426541763Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.952272801s" Nov 5 04:53:46.426668 containerd[1646]: time="2025-11-05T04:53:46.426608999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 5 04:53:46.428069 containerd[1646]: time="2025-11-05T04:53:46.428038850Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 5 04:53:48.062680 containerd[1646]: time="2025-11-05T04:53:48.062606955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:48.063423 containerd[1646]: time="2025-11-05T04:53:48.063363503Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26012689" Nov 5 04:53:48.064513 containerd[1646]: time="2025-11-05T04:53:48.064482201Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:48.069777 containerd[1646]: time="2025-11-05T04:53:48.069724711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:48.070652 containerd[1646]: time="2025-11-05T04:53:48.070589121Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.642521037s" Nov 5 04:53:48.070697 containerd[1646]: time="2025-11-05T04:53:48.070652731Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 5 04:53:48.071300 containerd[1646]: time="2025-11-05T04:53:48.071261472Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 5 04:53:50.101768 containerd[1646]: time="2025-11-05T04:53:50.101673504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:50.103330 containerd[1646]: time="2025-11-05T04:53:50.103275648Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20147431" Nov 5 04:53:50.104892 containerd[1646]: time="2025-11-05T04:53:50.104836535Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:50.107601 containerd[1646]: time="2025-11-05T04:53:50.107562646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:50.108833 containerd[1646]: time="2025-11-05T04:53:50.108755182Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.037429871s" Nov 5 04:53:50.108833 containerd[1646]: time="2025-11-05T04:53:50.108812520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 5 04:53:50.109487 containerd[1646]: time="2025-11-05T04:53:50.109407535Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 5 04:53:50.540557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 04:53:50.542569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:53:50.753578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 5 04:53:50.777184 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 04:53:51.017912 kubelet[2211]: E1105 04:53:51.017748 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 04:53:51.021999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 04:53:51.022206 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 04:53:51.022730 systemd[1]: kubelet.service: Consumed 300ms CPU time, 112.6M memory peak. Nov 5 04:53:51.895605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265710836.mount: Deactivated successfully. Nov 5 04:53:52.951151 containerd[1646]: time="2025-11-05T04:53:52.951089729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:52.951818 containerd[1646]: time="2025-11-05T04:53:52.951763692Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31925747" Nov 5 04:53:52.952908 containerd[1646]: time="2025-11-05T04:53:52.952881508Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:52.954753 containerd[1646]: time="2025-11-05T04:53:52.954724424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:52.955296 containerd[1646]: time="2025-11-05T04:53:52.955247615Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.845756894s" Nov 5 04:53:52.955332 containerd[1646]: time="2025-11-05T04:53:52.955295234Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 5 04:53:52.955884 containerd[1646]: time="2025-11-05T04:53:52.955818645Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 5 04:53:53.606738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3624034786.mount: Deactivated successfully. Nov 5 04:53:55.012127 containerd[1646]: time="2025-11-05T04:53:55.012034099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:55.013220 containerd[1646]: time="2025-11-05T04:53:55.013168606Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Nov 5 04:53:55.014808 containerd[1646]: time="2025-11-05T04:53:55.014767674Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:55.017342 containerd[1646]: time="2025-11-05T04:53:55.017296596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:55.018207 containerd[1646]: time="2025-11-05T04:53:55.018170184Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.062320831s" Nov 5 04:53:55.018207 containerd[1646]: time="2025-11-05T04:53:55.018206202Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 5 04:53:55.018869 containerd[1646]: time="2025-11-05T04:53:55.018703945Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 04:53:55.444542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1141710918.mount: Deactivated successfully. Nov 5 04:53:55.450558 containerd[1646]: time="2025-11-05T04:53:55.450505733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 04:53:55.451204 containerd[1646]: time="2025-11-05T04:53:55.451153648Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 5 04:53:55.452276 containerd[1646]: time="2025-11-05T04:53:55.452239654Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 04:53:55.454181 containerd[1646]: time="2025-11-05T04:53:55.454137142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 04:53:55.454732 containerd[1646]: time="2025-11-05T04:53:55.454696992Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 435.967208ms" Nov 5 04:53:55.454732 containerd[1646]: time="2025-11-05T04:53:55.454725746Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 5 04:53:55.455191 containerd[1646]: time="2025-11-05T04:53:55.455165240Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 5 04:53:55.920966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781208928.mount: Deactivated successfully. Nov 5 04:53:58.728685 containerd[1646]: time="2025-11-05T04:53:58.728598055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:58.729425 containerd[1646]: time="2025-11-05T04:53:58.729358882Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Nov 5 04:53:58.730595 containerd[1646]: time="2025-11-05T04:53:58.730537061Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:58.733134 containerd[1646]: time="2025-11-05T04:53:58.733095318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:53:58.734299 containerd[1646]: time="2025-11-05T04:53:58.734255273Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.279067931s" Nov 5 04:53:58.734299 containerd[1646]: time="2025-11-05T04:53:58.734291400Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 5 04:54:01.242384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 5 04:54:01.244389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:54:01.474976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:54:01.491181 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 04:54:01.536482 kubelet[2372]: E1105 04:54:01.536268 2372 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 04:54:01.540593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 04:54:01.540794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 04:54:01.541244 systemd[1]: kubelet.service: Consumed 223ms CPU time, 110.7M memory peak. Nov 5 04:54:02.345628 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:54:02.345801 systemd[1]: kubelet.service: Consumed 223ms CPU time, 110.7M memory peak. Nov 5 04:54:02.348221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:54:02.380030 systemd[1]: Reload requested from client PID 2388 ('systemctl') (unit session-9.scope)... Nov 5 04:54:02.380062 systemd[1]: Reloading... 
Nov 5 04:54:02.478925 zram_generator::config[2436]: No configuration found. Nov 5 04:54:03.291936 systemd[1]: Reloading finished in 911 ms. Nov 5 04:54:03.375753 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 04:54:03.375904 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 04:54:03.376290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:54:03.376346 systemd[1]: kubelet.service: Consumed 158ms CPU time, 98.3M memory peak. Nov 5 04:54:03.378100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:54:03.587758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:54:03.592075 (kubelet)[2480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 04:54:03.637310 kubelet[2480]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 04:54:03.637310 kubelet[2480]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 04:54:03.637310 kubelet[2480]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 04:54:03.637762 kubelet[2480]: I1105 04:54:03.637362 2480 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 04:54:04.343631 kubelet[2480]: I1105 04:54:04.343549 2480 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 04:54:04.343631 kubelet[2480]: I1105 04:54:04.343604 2480 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 04:54:04.344006 kubelet[2480]: I1105 04:54:04.343978 2480 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 04:54:04.418597 kubelet[2480]: I1105 04:54:04.418526 2480 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 04:54:04.419061 kubelet[2480]: E1105 04:54:04.419024 2480 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 04:54:04.425774 kubelet[2480]: I1105 04:54:04.425732 2480 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 04:54:04.431876 kubelet[2480]: I1105 04:54:04.431812 2480 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 04:54:04.432243 kubelet[2480]: I1105 04:54:04.432199 2480 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 04:54:04.432409 kubelet[2480]: I1105 04:54:04.432233 2480 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 04:54:04.432409 kubelet[2480]: I1105 04:54:04.432408 2480 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 04:54:04.432592 
kubelet[2480]: I1105 04:54:04.432418 2480 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 04:54:04.432663 kubelet[2480]: I1105 04:54:04.432642 2480 state_mem.go:36] "Initialized new in-memory state store" Nov 5 04:54:04.434696 kubelet[2480]: I1105 04:54:04.434666 2480 kubelet.go:480] "Attempting to sync node with API server" Nov 5 04:54:04.434696 kubelet[2480]: I1105 04:54:04.434685 2480 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 04:54:04.434751 kubelet[2480]: I1105 04:54:04.434707 2480 kubelet.go:386] "Adding apiserver pod source" Nov 5 04:54:04.437260 kubelet[2480]: I1105 04:54:04.436110 2480 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 04:54:04.442776 kubelet[2480]: I1105 04:54:04.442742 2480 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 5 04:54:04.443397 kubelet[2480]: E1105 04:54:04.443351 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 04:54:04.443509 kubelet[2480]: I1105 04:54:04.443474 2480 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 04:54:04.443654 kubelet[2480]: E1105 04:54:04.443611 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 04:54:04.444393 kubelet[2480]: W1105 04:54:04.444369 2480 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 04:54:04.447986 kubelet[2480]: I1105 04:54:04.447965 2480 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 04:54:04.448039 kubelet[2480]: I1105 04:54:04.448026 2480 server.go:1289] "Started kubelet" Nov 5 04:54:04.450387 kubelet[2480]: I1105 04:54:04.449245 2480 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 04:54:04.450387 kubelet[2480]: I1105 04:54:04.449544 2480 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 04:54:04.450387 kubelet[2480]: I1105 04:54:04.450201 2480 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 04:54:04.450387 kubelet[2480]: I1105 04:54:04.450349 2480 server.go:317] "Adding debug handlers to kubelet server" Nov 5 04:54:04.450387 kubelet[2480]: I1105 04:54:04.450373 2480 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 04:54:04.455558 kubelet[2480]: I1105 04:54:04.454850 2480 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 04:54:04.455621 kubelet[2480]: E1105 04:54:04.455576 2480 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 04:54:04.455621 kubelet[2480]: I1105 04:54:04.455617 2480 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 04:54:04.455789 kubelet[2480]: E1105 04:54:04.454329 2480 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18750349be70f569 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 04:54:04.447987049 +0000 UTC m=+0.851678974,LastTimestamp:2025-11-05 04:54:04.447987049 +0000 UTC m=+0.851678974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 04:54:04.455932 kubelet[2480]: I1105 04:54:04.455901 2480 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 04:54:04.456002 kubelet[2480]: I1105 04:54:04.455973 2480 reconciler.go:26] "Reconciler: start to sync state" Nov 5 04:54:04.456412 kubelet[2480]: I1105 04:54:04.456394 2480 factory.go:223] Registration of the systemd container factory successfully Nov 5 04:54:04.456587 kubelet[2480]: E1105 04:54:04.456530 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 04:54:04.456587 kubelet[2480]: I1105 04:54:04.456550 2480 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 04:54:04.457905 kubelet[2480]: E1105 04:54:04.456828 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="200ms" Nov 5 04:54:04.458359 kubelet[2480]: E1105 04:54:04.458335 2480 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 04:54:04.459149 kubelet[2480]: I1105 04:54:04.459045 2480 factory.go:223] Registration of the containerd container factory successfully Nov 5 04:54:04.473832 kubelet[2480]: I1105 04:54:04.473809 2480 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 04:54:04.473832 kubelet[2480]: I1105 04:54:04.473824 2480 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 04:54:04.473832 kubelet[2480]: I1105 04:54:04.473839 2480 state_mem.go:36] "Initialized new in-memory state store" Nov 5 04:54:04.477521 kubelet[2480]: I1105 04:54:04.477153 2480 policy_none.go:49] "None policy: Start" Nov 5 04:54:04.477521 kubelet[2480]: I1105 04:54:04.477198 2480 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 04:54:04.477521 kubelet[2480]: I1105 04:54:04.477214 2480 state_mem.go:35] "Initializing new in-memory state store" Nov 5 04:54:04.478832 kubelet[2480]: I1105 04:54:04.478770 2480 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 04:54:04.482272 kubelet[2480]: I1105 04:54:04.482243 2480 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 04:54:04.482333 kubelet[2480]: I1105 04:54:04.482285 2480 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 04:54:04.482333 kubelet[2480]: I1105 04:54:04.482309 2480 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 04:54:04.482333 kubelet[2480]: I1105 04:54:04.482321 2480 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 04:54:04.482409 kubelet[2480]: E1105 04:54:04.482383 2480 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 04:54:04.484357 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 5 04:54:04.485643 kubelet[2480]: E1105 04:54:04.485355 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 04:54:04.498551 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 04:54:04.504289 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 04:54:04.526108 kubelet[2480]: E1105 04:54:04.525964 2480 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 04:54:04.526314 kubelet[2480]: I1105 04:54:04.526261 2480 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 04:54:04.526314 kubelet[2480]: I1105 04:54:04.526276 2480 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 04:54:04.526695 kubelet[2480]: I1105 04:54:04.526663 2480 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 04:54:04.528112 kubelet[2480]: E1105 04:54:04.528085 2480 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 04:54:04.528251 kubelet[2480]: E1105 04:54:04.528216 2480 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 5 04:54:04.595836 systemd[1]: Created slice kubepods-burstable-podedea3c35b2b2f22f72ebade0afad9ca2.slice - libcontainer container kubepods-burstable-podedea3c35b2b2f22f72ebade0afad9ca2.slice. 
Nov 5 04:54:04.613834 kubelet[2480]: E1105 04:54:04.613793 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 04:54:04.618438 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice.
Nov 5 04:54:04.623010 kubelet[2480]: E1105 04:54:04.622973 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 04:54:04.625422 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice.
Nov 5 04:54:04.627087 kubelet[2480]: E1105 04:54:04.627041 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 04:54:04.628171 kubelet[2480]: I1105 04:54:04.628148 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 04:54:04.628596 kubelet[2480]: E1105 04:54:04.628559 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost"
Nov 5 04:54:04.658461 kubelet[2480]: E1105 04:54:04.658396 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="400ms"
Nov 5 04:54:04.757056 kubelet[2480]: I1105 04:54:04.756969 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:04.757056 kubelet[2480]: I1105 04:54:04.757038 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:04.757056 kubelet[2480]: I1105 04:54:04.757068 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edea3c35b2b2f22f72ebade0afad9ca2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"edea3c35b2b2f22f72ebade0afad9ca2\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 04:54:04.757283 kubelet[2480]: I1105 04:54:04.757131 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:04.757283 kubelet[2480]: I1105 04:54:04.757199 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:04.757283 kubelet[2480]: I1105 04:54:04.757226 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:04.757283 kubelet[2480]: I1105 04:54:04.757250 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Nov 5 04:54:04.757283 kubelet[2480]: I1105 04:54:04.757264 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edea3c35b2b2f22f72ebade0afad9ca2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"edea3c35b2b2f22f72ebade0afad9ca2\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 04:54:04.757407 kubelet[2480]: I1105 04:54:04.757282 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edea3c35b2b2f22f72ebade0afad9ca2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"edea3c35b2b2f22f72ebade0afad9ca2\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 04:54:04.830753 kubelet[2480]: I1105 04:54:04.830684 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 04:54:04.831401 kubelet[2480]: E1105 04:54:04.831350 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost"
Nov 5 04:54:04.915617 kubelet[2480]: E1105 04:54:04.915423 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:04.916390 containerd[1646]: time="2025-11-05T04:54:04.916333266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:edea3c35b2b2f22f72ebade0afad9ca2,Namespace:kube-system,Attempt:0,}"
Nov 5 04:54:04.924915 kubelet[2480]: E1105 04:54:04.923647 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:04.924988 containerd[1646]: time="2025-11-05T04:54:04.924299920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}"
Nov 5 04:54:04.928594 kubelet[2480]: E1105 04:54:04.928543 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:04.929194 containerd[1646]: time="2025-11-05T04:54:04.929154199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}"
Nov 5 04:54:04.954294 containerd[1646]: time="2025-11-05T04:54:04.954226873Z" level=info msg="connecting to shim 916898a3e9236c3be474131c7287fcff706661f008a5f594584f4026aaa01bee" address="unix:///run/containerd/s/6a71ff505339f808f9cdc69ecc089cee9d3ebd33a680daa6f0d02f3105496f16" namespace=k8s.io protocol=ttrpc version=3
Nov 5 04:54:04.972437 containerd[1646]: time="2025-11-05T04:54:04.969089000Z" level=info msg="connecting to shim dc9d82f522a998e1d521cb2e255512f21a6af7612638f5ab5a0d8220dfc38813" address="unix:///run/containerd/s/57aa95f3a7507cfc175a0783d07574f1ee9fc34a8d61f777ad0eb44e664d2c83" namespace=k8s.io protocol=ttrpc version=3
Nov 5 04:54:04.973590 containerd[1646]: time="2025-11-05T04:54:04.973515200Z" level=info msg="connecting to shim f3210cb7fb43f2658ac143ab2b6af002eecc9b458ce27bf99225f50e9f6997e0" address="unix:///run/containerd/s/65fadff05730e1f76a01094ce8d145e192de82795b42fbc9c59ec57fbb2be559" namespace=k8s.io protocol=ttrpc version=3
Nov 5 04:54:05.006012 systemd[1]: Started cri-containerd-916898a3e9236c3be474131c7287fcff706661f008a5f594584f4026aaa01bee.scope - libcontainer container 916898a3e9236c3be474131c7287fcff706661f008a5f594584f4026aaa01bee.
Nov 5 04:54:05.010711 systemd[1]: Started cri-containerd-dc9d82f522a998e1d521cb2e255512f21a6af7612638f5ab5a0d8220dfc38813.scope - libcontainer container dc9d82f522a998e1d521cb2e255512f21a6af7612638f5ab5a0d8220dfc38813.
Nov 5 04:54:05.028002 systemd[1]: Started cri-containerd-f3210cb7fb43f2658ac143ab2b6af002eecc9b458ce27bf99225f50e9f6997e0.scope - libcontainer container f3210cb7fb43f2658ac143ab2b6af002eecc9b458ce27bf99225f50e9f6997e0.
Nov 5 04:54:05.058978 kubelet[2480]: E1105 04:54:05.058921 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="800ms"
Nov 5 04:54:05.075760 containerd[1646]: time="2025-11-05T04:54:05.075623879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:edea3c35b2b2f22f72ebade0afad9ca2,Namespace:kube-system,Attempt:0,} returns sandbox id \"916898a3e9236c3be474131c7287fcff706661f008a5f594584f4026aaa01bee\""
Nov 5 04:54:05.078886 kubelet[2480]: E1105 04:54:05.077404 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:05.083140 containerd[1646]: time="2025-11-05T04:54:05.083107640Z" level=info msg="CreateContainer within sandbox \"916898a3e9236c3be474131c7287fcff706661f008a5f594584f4026aaa01bee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 5 04:54:05.089118 containerd[1646]: time="2025-11-05T04:54:05.089044636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc9d82f522a998e1d521cb2e255512f21a6af7612638f5ab5a0d8220dfc38813\""
Nov 5 04:54:05.090058 kubelet[2480]: E1105 04:54:05.090028 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:05.094687 containerd[1646]: time="2025-11-05T04:54:05.094652242Z" level=info msg="CreateContainer within sandbox \"dc9d82f522a998e1d521cb2e255512f21a6af7612638f5ab5a0d8220dfc38813\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 5 04:54:05.096950 containerd[1646]: time="2025-11-05T04:54:05.096913994Z" level=info msg="Container 326ea34fd48c2c6a854b96966e74586f6afeab5fd7dc11bc16773bbce490e574: CDI devices from CRI Config.CDIDevices: []"
Nov 5 04:54:05.106361 containerd[1646]: time="2025-11-05T04:54:05.106317706Z" level=info msg="Container 323d237392f0eb36a89a137489499f7b75b816f386696996d8bf3ae5b8471939: CDI devices from CRI Config.CDIDevices: []"
Nov 5 04:54:05.109298 containerd[1646]: time="2025-11-05T04:54:05.109240551Z" level=info msg="CreateContainer within sandbox \"916898a3e9236c3be474131c7287fcff706661f008a5f594584f4026aaa01bee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"326ea34fd48c2c6a854b96966e74586f6afeab5fd7dc11bc16773bbce490e574\""
Nov 5 04:54:05.110468 containerd[1646]: time="2025-11-05T04:54:05.110424945Z" level=info msg="StartContainer for \"326ea34fd48c2c6a854b96966e74586f6afeab5fd7dc11bc16773bbce490e574\""
Nov 5 04:54:05.112543 containerd[1646]: time="2025-11-05T04:54:05.112514718Z" level=info msg="connecting to shim 326ea34fd48c2c6a854b96966e74586f6afeab5fd7dc11bc16773bbce490e574" address="unix:///run/containerd/s/6a71ff505339f808f9cdc69ecc089cee9d3ebd33a680daa6f0d02f3105496f16" protocol=ttrpc version=3
Nov 5 04:54:05.115274 containerd[1646]: time="2025-11-05T04:54:05.115221901Z" level=info msg="CreateContainer within sandbox \"dc9d82f522a998e1d521cb2e255512f21a6af7612638f5ab5a0d8220dfc38813\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"323d237392f0eb36a89a137489499f7b75b816f386696996d8bf3ae5b8471939\""
Nov 5 04:54:05.116973 containerd[1646]: time="2025-11-05T04:54:05.116721207Z" level=info msg="StartContainer for \"323d237392f0eb36a89a137489499f7b75b816f386696996d8bf3ae5b8471939\""
Nov 5 04:54:05.118791 containerd[1646]: time="2025-11-05T04:54:05.118751667Z" level=info msg="connecting to shim 323d237392f0eb36a89a137489499f7b75b816f386696996d8bf3ae5b8471939" address="unix:///run/containerd/s/57aa95f3a7507cfc175a0783d07574f1ee9fc34a8d61f777ad0eb44e664d2c83" protocol=ttrpc version=3
Nov 5 04:54:05.119943 containerd[1646]: time="2025-11-05T04:54:05.119831610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3210cb7fb43f2658ac143ab2b6af002eecc9b458ce27bf99225f50e9f6997e0\""
Nov 5 04:54:05.121273 kubelet[2480]: E1105 04:54:05.121234 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:05.128424 containerd[1646]: time="2025-11-05T04:54:05.127953683Z" level=info msg="CreateContainer within sandbox \"f3210cb7fb43f2658ac143ab2b6af002eecc9b458ce27bf99225f50e9f6997e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 5 04:54:05.138017 systemd[1]: Started cri-containerd-326ea34fd48c2c6a854b96966e74586f6afeab5fd7dc11bc16773bbce490e574.scope - libcontainer container 326ea34fd48c2c6a854b96966e74586f6afeab5fd7dc11bc16773bbce490e574.
Nov 5 04:54:05.139256 containerd[1646]: time="2025-11-05T04:54:05.138990835Z" level=info msg="Container bca60385105e59d1a8d6f17f5b2c523563f25d3875d9c6b846bde05160506bf8: CDI devices from CRI Config.CDIDevices: []"
Nov 5 04:54:05.149433 containerd[1646]: time="2025-11-05T04:54:05.149379578Z" level=info msg="CreateContainer within sandbox \"f3210cb7fb43f2658ac143ab2b6af002eecc9b458ce27bf99225f50e9f6997e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bca60385105e59d1a8d6f17f5b2c523563f25d3875d9c6b846bde05160506bf8\""
Nov 5 04:54:05.150019 containerd[1646]: time="2025-11-05T04:54:05.149988001Z" level=info msg="StartContainer for \"bca60385105e59d1a8d6f17f5b2c523563f25d3875d9c6b846bde05160506bf8\""
Nov 5 04:54:05.151276 containerd[1646]: time="2025-11-05T04:54:05.151242027Z" level=info msg="connecting to shim bca60385105e59d1a8d6f17f5b2c523563f25d3875d9c6b846bde05160506bf8" address="unix:///run/containerd/s/65fadff05730e1f76a01094ce8d145e192de82795b42fbc9c59ec57fbb2be559" protocol=ttrpc version=3
Nov 5 04:54:05.159123 systemd[1]: Started cri-containerd-323d237392f0eb36a89a137489499f7b75b816f386696996d8bf3ae5b8471939.scope - libcontainer container 323d237392f0eb36a89a137489499f7b75b816f386696996d8bf3ae5b8471939.
Nov 5 04:54:05.186134 systemd[1]: Started cri-containerd-bca60385105e59d1a8d6f17f5b2c523563f25d3875d9c6b846bde05160506bf8.scope - libcontainer container bca60385105e59d1a8d6f17f5b2c523563f25d3875d9c6b846bde05160506bf8.
Nov 5 04:54:05.221279 containerd[1646]: time="2025-11-05T04:54:05.221195780Z" level=info msg="StartContainer for \"326ea34fd48c2c6a854b96966e74586f6afeab5fd7dc11bc16773bbce490e574\" returns successfully"
Nov 5 04:54:05.233266 kubelet[2480]: I1105 04:54:05.233230 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 04:54:05.233591 kubelet[2480]: E1105 04:54:05.233550 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost"
Nov 5 04:54:05.262291 containerd[1646]: time="2025-11-05T04:54:05.262239627Z" level=info msg="StartContainer for \"323d237392f0eb36a89a137489499f7b75b816f386696996d8bf3ae5b8471939\" returns successfully"
Nov 5 04:54:05.267812 containerd[1646]: time="2025-11-05T04:54:05.267747742Z" level=info msg="StartContainer for \"bca60385105e59d1a8d6f17f5b2c523563f25d3875d9c6b846bde05160506bf8\" returns successfully"
Nov 5 04:54:05.502918 kubelet[2480]: E1105 04:54:05.502188 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 04:54:05.502918 kubelet[2480]: E1105 04:54:05.502527 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:05.506934 kubelet[2480]: E1105 04:54:05.506902 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 04:54:05.507942 kubelet[2480]: E1105 04:54:05.507920 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:05.511603 kubelet[2480]: E1105 04:54:05.511578 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 04:54:05.511954 kubelet[2480]: E1105 04:54:05.511848 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:06.035890 kubelet[2480]: I1105 04:54:06.035392 2480 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 04:54:06.514266 kubelet[2480]: E1105 04:54:06.514028 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 04:54:06.514266 kubelet[2480]: E1105 04:54:06.514196 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:06.514559 kubelet[2480]: E1105 04:54:06.514544 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 04:54:06.514712 kubelet[2480]: E1105 04:54:06.514698 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:06.609577 kubelet[2480]: E1105 04:54:06.609509 2480 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 5 04:54:06.688554 kubelet[2480]: I1105 04:54:06.687742 2480 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 5 04:54:06.757135 kubelet[2480]: I1105 04:54:06.757078 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:06.762233 kubelet[2480]: E1105 04:54:06.762205 2480 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:06.762233 kubelet[2480]: I1105 04:54:06.762233 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 04:54:06.763665 kubelet[2480]: E1105 04:54:06.763630 2480 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 5 04:54:06.763665 kubelet[2480]: I1105 04:54:06.763649 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 04:54:06.765176 kubelet[2480]: E1105 04:54:06.765029 2480 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 5 04:54:07.446283 kubelet[2480]: I1105 04:54:07.446244 2480 apiserver.go:52] "Watching apiserver"
Nov 5 04:54:07.456379 kubelet[2480]: I1105 04:54:07.456349 2480 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 04:54:07.514360 kubelet[2480]: I1105 04:54:07.514314 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 04:54:07.516205 kubelet[2480]: E1105 04:54:07.516150 2480 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 5 04:54:07.516410 kubelet[2480]: E1105 04:54:07.516383 2480 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:08.540144 systemd[1]: Reload requested from client PID 2764 ('systemctl') (unit session-9.scope)...
Nov 5 04:54:08.540160 systemd[1]: Reloading...
Nov 5 04:54:08.615895 zram_generator::config[2808]: No configuration found.
Nov 5 04:54:08.847175 systemd[1]: Reloading finished in 306 ms.
Nov 5 04:54:08.870771 kubelet[2480]: I1105 04:54:08.870682 2480 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 04:54:08.870799 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 04:54:08.889021 systemd[1]: kubelet.service: Deactivated successfully.
Nov 5 04:54:08.889315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 04:54:08.889365 systemd[1]: kubelet.service: Consumed 1.004s CPU time, 130.9M memory peak.
Nov 5 04:54:08.891227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 04:54:09.117125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 04:54:09.128314 (kubelet)[2853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 04:54:09.167392 kubelet[2853]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 04:54:09.167392 kubelet[2853]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 04:54:09.167392 kubelet[2853]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 04:54:09.167820 kubelet[2853]: I1105 04:54:09.167443 2853 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 04:54:09.173836 kubelet[2853]: I1105 04:54:09.173790 2853 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 5 04:54:09.173836 kubelet[2853]: I1105 04:54:09.173813 2853 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 04:54:09.174046 kubelet[2853]: I1105 04:54:09.174031 2853 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 5 04:54:09.175066 kubelet[2853]: I1105 04:54:09.175036 2853 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 5 04:54:09.176843 kubelet[2853]: I1105 04:54:09.176820 2853 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 04:54:09.181898 kubelet[2853]: I1105 04:54:09.181872 2853 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 04:54:09.186757 kubelet[2853]: I1105 04:54:09.186731 2853 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 04:54:09.186993 kubelet[2853]: I1105 04:54:09.186960 2853 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 04:54:09.187098 kubelet[2853]: I1105 04:54:09.186981 2853 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 04:54:09.187193 kubelet[2853]: I1105 04:54:09.187107 2853 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 04:54:09.187193 kubelet[2853]: I1105 04:54:09.187116 2853 container_manager_linux.go:303] "Creating device plugin manager"
Nov 5 04:54:09.187193 kubelet[2853]: I1105 04:54:09.187157 2853 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 04:54:09.187332 kubelet[2853]: I1105 04:54:09.187318 2853 kubelet.go:480] "Attempting to sync node with API server"
Nov 5 04:54:09.187332 kubelet[2853]: I1105 04:54:09.187331 2853 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 04:54:09.187373 kubelet[2853]: I1105 04:54:09.187350 2853 kubelet.go:386] "Adding apiserver pod source"
Nov 5 04:54:09.187373 kubelet[2853]: I1105 04:54:09.187363 2853 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 04:54:09.190167 kubelet[2853]: I1105 04:54:09.190134 2853 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1"
Nov 5 04:54:09.190750 kubelet[2853]: I1105 04:54:09.190704 2853 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 04:54:09.197170 kubelet[2853]: I1105 04:54:09.197140 2853 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 04:54:09.197310 kubelet[2853]: I1105 04:54:09.197197 2853 server.go:1289] "Started kubelet"
Nov 5 04:54:09.197473 kubelet[2853]: I1105 04:54:09.197429 2853 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 04:54:09.197529 kubelet[2853]: I1105 04:54:09.197477 2853 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 04:54:09.197841 kubelet[2853]: I1105 04:54:09.197824 2853 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 04:54:09.198253 kubelet[2853]: I1105 04:54:09.198235 2853 server.go:317] "Adding debug handlers to kubelet server"
Nov 5 04:54:09.202276 kubelet[2853]: I1105 04:54:09.202234 2853 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 04:54:09.205619 kubelet[2853]: I1105 04:54:09.205537 2853 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 04:54:09.206078 kubelet[2853]: I1105 04:54:09.206056 2853 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 04:54:09.206753 kubelet[2853]: I1105 04:54:09.206725 2853 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 04:54:09.206924 kubelet[2853]: I1105 04:54:09.206905 2853 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 04:54:09.212052 kubelet[2853]: I1105 04:54:09.212016 2853 factory.go:223] Registration of the containerd container factory successfully
Nov 5 04:54:09.212052 kubelet[2853]: I1105 04:54:09.212041 2853 factory.go:223] Registration of the systemd container factory successfully
Nov 5 04:54:09.212218 kubelet[2853]: I1105 04:54:09.212129 2853 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 04:54:09.221733 kubelet[2853]: I1105 04:54:09.221669 2853 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 5 04:54:09.223398 kubelet[2853]: I1105 04:54:09.222966 2853 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 5 04:54:09.223398 kubelet[2853]: I1105 04:54:09.222992 2853 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 5 04:54:09.223398 kubelet[2853]: I1105 04:54:09.223014 2853 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 04:54:09.223398 kubelet[2853]: I1105 04:54:09.223023 2853 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 5 04:54:09.223398 kubelet[2853]: E1105 04:54:09.223071 2853 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 04:54:09.248955 kubelet[2853]: I1105 04:54:09.248921 2853 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 04:54:09.248955 kubelet[2853]: I1105 04:54:09.248935 2853 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 04:54:09.248955 kubelet[2853]: I1105 04:54:09.248953 2853 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 04:54:09.249136 kubelet[2853]: I1105 04:54:09.249070 2853 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 5 04:54:09.249136 kubelet[2853]: I1105 04:54:09.249079 2853 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 5 04:54:09.249136 kubelet[2853]: I1105 04:54:09.249093 2853 policy_none.go:49] "None policy: Start"
Nov 5 04:54:09.249136 kubelet[2853]: I1105 04:54:09.249102 2853 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 04:54:09.249136 kubelet[2853]: I1105 04:54:09.249111 2853 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 04:54:09.249224 kubelet[2853]: I1105 04:54:09.249191 2853 state_mem.go:75] "Updated machine memory state"
Nov 5 04:54:09.253114 kubelet[2853]: E1105 04:54:09.253073 2853 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 04:54:09.253304 kubelet[2853]: I1105 04:54:09.253290 2853 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 04:54:09.253340 kubelet[2853]: I1105 04:54:09.253305 2853 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 04:54:09.254007 kubelet[2853]: I1105 04:54:09.253825 2853 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 04:54:09.255324 kubelet[2853]: E1105 04:54:09.255305 2853 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 04:54:09.324092 kubelet[2853]: I1105 04:54:09.324046 2853 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:09.324092 kubelet[2853]: I1105 04:54:09.324089 2853 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 04:54:09.324315 kubelet[2853]: I1105 04:54:09.324117 2853 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 04:54:09.360888 kubelet[2853]: I1105 04:54:09.360844 2853 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 04:54:09.366980 kubelet[2853]: I1105 04:54:09.366949 2853 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 5 04:54:09.367070 kubelet[2853]: I1105 04:54:09.367041 2853 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 5 04:54:09.408260 kubelet[2853]: I1105 04:54:09.408145 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:09.408260 kubelet[2853]: I1105 04:54:09.408175 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:09.408260 kubelet[2853]: I1105 04:54:09.408192 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:09.408260 kubelet[2853]: I1105 04:54:09.408207 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Nov 5 04:54:09.408260 kubelet[2853]: I1105 04:54:09.408223 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edea3c35b2b2f22f72ebade0afad9ca2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"edea3c35b2b2f22f72ebade0afad9ca2\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 04:54:09.408469 kubelet[2853]: I1105 04:54:09.408262 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:09.408469 kubelet[2853]: I1105 04:54:09.408277 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:54:09.408469 kubelet[2853]: I1105 04:54:09.408295 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edea3c35b2b2f22f72ebade0afad9ca2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"edea3c35b2b2f22f72ebade0afad9ca2\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 04:54:09.408469 kubelet[2853]: I1105 04:54:09.408326 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edea3c35b2b2f22f72ebade0afad9ca2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"edea3c35b2b2f22f72ebade0afad9ca2\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 04:54:09.630490 kubelet[2853]: E1105 04:54:09.630447 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:09.632553 kubelet[2853]: E1105 04:54:09.632517 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:09.632622 kubelet[2853]: E1105 04:54:09.632521 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:54:10.188498 kubelet[2853]: I1105 04:54:10.188433 2853 apiserver.go:52] "Watching apiserver"
Nov 5 04:54:10.207468 kubelet[2853]: I1105 04:54:10.207424 2853 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 04:54:10.236164 kubelet[2853]: I1105 04:54:10.236129 2853 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 04:54:10.236406 kubelet[2853]: E1105 04:54:10.236313 2853 dns.go:153] "Nameserver limits
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:10.236406 kubelet[2853]: E1105 04:54:10.236372 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:10.245846 kubelet[2853]: E1105 04:54:10.245794 2853 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 5 04:54:10.246038 kubelet[2853]: E1105 04:54:10.245975 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:10.253403 kubelet[2853]: I1105 04:54:10.252957 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.252937461 podStartE2EDuration="1.252937461s" podCreationTimestamp="2025-11-05 04:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:54:10.252897134 +0000 UTC m=+1.120061681" watchObservedRunningTime="2025-11-05 04:54:10.252937461 +0000 UTC m=+1.120102008" Nov 5 04:54:10.258497 kubelet[2853]: I1105 04:54:10.258448 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.258438619 podStartE2EDuration="1.258438619s" podCreationTimestamp="2025-11-05 04:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:54:10.258433418 +0000 UTC m=+1.125597965" watchObservedRunningTime="2025-11-05 04:54:10.258438619 +0000 UTC m=+1.125603166" Nov 5 04:54:10.264498 kubelet[2853]: I1105 
04:54:10.264437 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.264427624 podStartE2EDuration="1.264427624s" podCreationTimestamp="2025-11-05 04:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:54:10.264192958 +0000 UTC m=+1.131357505" watchObservedRunningTime="2025-11-05 04:54:10.264427624 +0000 UTC m=+1.131592171" Nov 5 04:54:11.237130 kubelet[2853]: E1105 04:54:11.237091 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:11.237776 kubelet[2853]: E1105 04:54:11.237197 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:11.603447 update_engine[1634]: I20251105 04:54:11.603307 1634 update_attempter.cc:509] Updating boot flags... 
Nov 5 04:54:12.239033 kubelet[2853]: E1105 04:54:12.238984 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:13.068853 kubelet[2853]: E1105 04:54:13.068801 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:13.744486 kubelet[2853]: E1105 04:54:13.744419 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:14.930538 kubelet[2853]: I1105 04:54:14.930503 2853 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 04:54:14.930989 kubelet[2853]: I1105 04:54:14.930976 2853 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 04:54:14.931028 containerd[1646]: time="2025-11-05T04:54:14.930813669Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 04:54:15.748903 systemd[1]: Created slice kubepods-besteffort-podfdd0456f_0423_489a_81d7_d937c955f520.slice - libcontainer container kubepods-besteffort-podfdd0456f_0423_489a_81d7_d937c955f520.slice. 
Nov 5 04:54:15.848504 kubelet[2853]: I1105 04:54:15.848438 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdd0456f-0423-489a-81d7-d937c955f520-lib-modules\") pod \"kube-proxy-wmgx2\" (UID: \"fdd0456f-0423-489a-81d7-d937c955f520\") " pod="kube-system/kube-proxy-wmgx2" Nov 5 04:54:15.848504 kubelet[2853]: I1105 04:54:15.848512 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7knc\" (UniqueName: \"kubernetes.io/projected/fdd0456f-0423-489a-81d7-d937c955f520-kube-api-access-c7knc\") pod \"kube-proxy-wmgx2\" (UID: \"fdd0456f-0423-489a-81d7-d937c955f520\") " pod="kube-system/kube-proxy-wmgx2" Nov 5 04:54:15.848720 kubelet[2853]: I1105 04:54:15.848558 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fdd0456f-0423-489a-81d7-d937c955f520-kube-proxy\") pod \"kube-proxy-wmgx2\" (UID: \"fdd0456f-0423-489a-81d7-d937c955f520\") " pod="kube-system/kube-proxy-wmgx2" Nov 5 04:54:15.848720 kubelet[2853]: I1105 04:54:15.848593 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdd0456f-0423-489a-81d7-d937c955f520-xtables-lock\") pod \"kube-proxy-wmgx2\" (UID: \"fdd0456f-0423-489a-81d7-d937c955f520\") " pod="kube-system/kube-proxy-wmgx2" Nov 5 04:54:15.953882 kubelet[2853]: E1105 04:54:15.953828 2853 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 5 04:54:15.953882 kubelet[2853]: E1105 04:54:15.953873 2853 projected.go:194] Error preparing data for projected volume kube-api-access-c7knc for pod kube-system/kube-proxy-wmgx2: configmap "kube-root-ca.crt" not found Nov 5 04:54:15.954362 kubelet[2853]: E1105 04:54:15.953927 2853 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdd0456f-0423-489a-81d7-d937c955f520-kube-api-access-c7knc podName:fdd0456f-0423-489a-81d7-d937c955f520 nodeName:}" failed. No retries permitted until 2025-11-05 04:54:16.45390781 +0000 UTC m=+7.321072357 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c7knc" (UniqueName: "kubernetes.io/projected/fdd0456f-0423-489a-81d7-d937c955f520-kube-api-access-c7knc") pod "kube-proxy-wmgx2" (UID: "fdd0456f-0423-489a-81d7-d937c955f520") : configmap "kube-root-ca.crt" not found Nov 5 04:54:16.159759 systemd[1]: Created slice kubepods-besteffort-pod6353df7f_fec9_42c7_8fa1_44e82d3ece71.slice - libcontainer container kubepods-besteffort-pod6353df7f_fec9_42c7_8fa1_44e82d3ece71.slice. Nov 5 04:54:16.252463 kubelet[2853]: I1105 04:54:16.252400 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6353df7f-fec9-42c7-8fa1-44e82d3ece71-var-lib-calico\") pod \"tigera-operator-7dcd859c48-4j2lz\" (UID: \"6353df7f-fec9-42c7-8fa1-44e82d3ece71\") " pod="tigera-operator/tigera-operator-7dcd859c48-4j2lz" Nov 5 04:54:16.252463 kubelet[2853]: I1105 04:54:16.252437 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67pb6\" (UniqueName: \"kubernetes.io/projected/6353df7f-fec9-42c7-8fa1-44e82d3ece71-kube-api-access-67pb6\") pod \"tigera-operator-7dcd859c48-4j2lz\" (UID: \"6353df7f-fec9-42c7-8fa1-44e82d3ece71\") " pod="tigera-operator/tigera-operator-7dcd859c48-4j2lz" Nov 5 04:54:16.464166 containerd[1646]: time="2025-11-05T04:54:16.464032025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4j2lz,Uid:6353df7f-fec9-42c7-8fa1-44e82d3ece71,Namespace:tigera-operator,Attempt:0,}" Nov 5 04:54:16.509877 containerd[1646]: time="2025-11-05T04:54:16.508817021Z" level=info 
msg="connecting to shim f9d7c70244972f1dd3ab653d83058a33485e821145f2d8e803b974e916e69a57" address="unix:///run/containerd/s/6d1be548a9a9938ee7e41509fe3dd8dbe842c1260c7319ccfc558ac6995ce9ca" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:54:16.568017 systemd[1]: Started cri-containerd-f9d7c70244972f1dd3ab653d83058a33485e821145f2d8e803b974e916e69a57.scope - libcontainer container f9d7c70244972f1dd3ab653d83058a33485e821145f2d8e803b974e916e69a57. Nov 5 04:54:16.610930 containerd[1646]: time="2025-11-05T04:54:16.610873725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4j2lz,Uid:6353df7f-fec9-42c7-8fa1-44e82d3ece71,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f9d7c70244972f1dd3ab653d83058a33485e821145f2d8e803b974e916e69a57\"" Nov 5 04:54:16.612677 containerd[1646]: time="2025-11-05T04:54:16.612642973Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 04:54:16.662310 kubelet[2853]: E1105 04:54:16.662241 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:16.663004 containerd[1646]: time="2025-11-05T04:54:16.662947455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wmgx2,Uid:fdd0456f-0423-489a-81d7-d937c955f520,Namespace:kube-system,Attempt:0,}" Nov 5 04:54:16.688717 containerd[1646]: time="2025-11-05T04:54:16.688640976Z" level=info msg="connecting to shim 170922ac50f6c9789ab7a9ad695bf27942cc232004e5a46639a2b9599bde3f9c" address="unix:///run/containerd/s/32605a33d9fac4a2e365739d6aba814607ed7ca71b6a8c20a812614b45507ff8" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:54:16.716999 systemd[1]: Started cri-containerd-170922ac50f6c9789ab7a9ad695bf27942cc232004e5a46639a2b9599bde3f9c.scope - libcontainer container 170922ac50f6c9789ab7a9ad695bf27942cc232004e5a46639a2b9599bde3f9c. 
Nov 5 04:54:16.749271 containerd[1646]: time="2025-11-05T04:54:16.749216738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wmgx2,Uid:fdd0456f-0423-489a-81d7-d937c955f520,Namespace:kube-system,Attempt:0,} returns sandbox id \"170922ac50f6c9789ab7a9ad695bf27942cc232004e5a46639a2b9599bde3f9c\"" Nov 5 04:54:16.750239 kubelet[2853]: E1105 04:54:16.750201 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:16.755704 containerd[1646]: time="2025-11-05T04:54:16.755651267Z" level=info msg="CreateContainer within sandbox \"170922ac50f6c9789ab7a9ad695bf27942cc232004e5a46639a2b9599bde3f9c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 04:54:16.766326 containerd[1646]: time="2025-11-05T04:54:16.766292458Z" level=info msg="Container c7a7c516fbd9e965f351608e2401e8d0a9d3d6c7db600b50a64a64045ded71c3: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:54:16.779885 containerd[1646]: time="2025-11-05T04:54:16.779808540Z" level=info msg="CreateContainer within sandbox \"170922ac50f6c9789ab7a9ad695bf27942cc232004e5a46639a2b9599bde3f9c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7a7c516fbd9e965f351608e2401e8d0a9d3d6c7db600b50a64a64045ded71c3\"" Nov 5 04:54:16.780929 containerd[1646]: time="2025-11-05T04:54:16.780818592Z" level=info msg="StartContainer for \"c7a7c516fbd9e965f351608e2401e8d0a9d3d6c7db600b50a64a64045ded71c3\"" Nov 5 04:54:16.782496 containerd[1646]: time="2025-11-05T04:54:16.782450841Z" level=info msg="connecting to shim c7a7c516fbd9e965f351608e2401e8d0a9d3d6c7db600b50a64a64045ded71c3" address="unix:///run/containerd/s/32605a33d9fac4a2e365739d6aba814607ed7ca71b6a8c20a812614b45507ff8" protocol=ttrpc version=3 Nov 5 04:54:16.808028 systemd[1]: Started cri-containerd-c7a7c516fbd9e965f351608e2401e8d0a9d3d6c7db600b50a64a64045ded71c3.scope - libcontainer container 
c7a7c516fbd9e965f351608e2401e8d0a9d3d6c7db600b50a64a64045ded71c3. Nov 5 04:54:16.854265 containerd[1646]: time="2025-11-05T04:54:16.854215311Z" level=info msg="StartContainer for \"c7a7c516fbd9e965f351608e2401e8d0a9d3d6c7db600b50a64a64045ded71c3\" returns successfully" Nov 5 04:54:17.248455 kubelet[2853]: E1105 04:54:17.248198 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:17.259130 kubelet[2853]: I1105 04:54:17.259019 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wmgx2" podStartSLOduration=2.258840996 podStartE2EDuration="2.258840996s" podCreationTimestamp="2025-11-05 04:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:54:17.258625588 +0000 UTC m=+8.125790155" watchObservedRunningTime="2025-11-05 04:54:17.258840996 +0000 UTC m=+8.126005563" Nov 5 04:54:18.001593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862965536.mount: Deactivated successfully. 
Nov 5 04:54:18.438125 containerd[1646]: time="2025-11-05T04:54:18.437977606Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:18.438877 containerd[1646]: time="2025-11-05T04:54:18.438799851Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Nov 5 04:54:18.440113 containerd[1646]: time="2025-11-05T04:54:18.440065433Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:18.442256 containerd[1646]: time="2025-11-05T04:54:18.442214195Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:18.442774 containerd[1646]: time="2025-11-05T04:54:18.442736583Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.830061529s" Nov 5 04:54:18.442774 containerd[1646]: time="2025-11-05T04:54:18.442764896Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 5 04:54:18.447370 containerd[1646]: time="2025-11-05T04:54:18.447315239Z" level=info msg="CreateContainer within sandbox \"f9d7c70244972f1dd3ab653d83058a33485e821145f2d8e803b974e916e69a57\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 04:54:18.455680 containerd[1646]: time="2025-11-05T04:54:18.455635409Z" level=info msg="Container 
4cc9bf4078e81cb464577a19277204236d7c88e5b84e2518c93420b93d45053b: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:54:18.459120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711181884.mount: Deactivated successfully. Nov 5 04:54:18.463166 containerd[1646]: time="2025-11-05T04:54:18.463137652Z" level=info msg="CreateContainer within sandbox \"f9d7c70244972f1dd3ab653d83058a33485e821145f2d8e803b974e916e69a57\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4cc9bf4078e81cb464577a19277204236d7c88e5b84e2518c93420b93d45053b\"" Nov 5 04:54:18.464567 containerd[1646]: time="2025-11-05T04:54:18.463611970Z" level=info msg="StartContainer for \"4cc9bf4078e81cb464577a19277204236d7c88e5b84e2518c93420b93d45053b\"" Nov 5 04:54:18.464567 containerd[1646]: time="2025-11-05T04:54:18.464464722Z" level=info msg="connecting to shim 4cc9bf4078e81cb464577a19277204236d7c88e5b84e2518c93420b93d45053b" address="unix:///run/containerd/s/6d1be548a9a9938ee7e41509fe3dd8dbe842c1260c7319ccfc558ac6995ce9ca" protocol=ttrpc version=3 Nov 5 04:54:18.488009 systemd[1]: Started cri-containerd-4cc9bf4078e81cb464577a19277204236d7c88e5b84e2518c93420b93d45053b.scope - libcontainer container 4cc9bf4078e81cb464577a19277204236d7c88e5b84e2518c93420b93d45053b. 
Nov 5 04:54:18.524375 containerd[1646]: time="2025-11-05T04:54:18.524298281Z" level=info msg="StartContainer for \"4cc9bf4078e81cb464577a19277204236d7c88e5b84e2518c93420b93d45053b\" returns successfully" Nov 5 04:54:19.264515 kubelet[2853]: I1105 04:54:19.264428 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-4j2lz" podStartSLOduration=1.433052117 podStartE2EDuration="3.264412603s" podCreationTimestamp="2025-11-05 04:54:16 +0000 UTC" firstStartedPulling="2025-11-05 04:54:16.612170639 +0000 UTC m=+7.479335186" lastFinishedPulling="2025-11-05 04:54:18.443531125 +0000 UTC m=+9.310695672" observedRunningTime="2025-11-05 04:54:19.263955829 +0000 UTC m=+10.131120376" watchObservedRunningTime="2025-11-05 04:54:19.264412603 +0000 UTC m=+10.131577150" Nov 5 04:54:19.431217 kubelet[2853]: E1105 04:54:19.431166 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:20.256243 kubelet[2853]: E1105 04:54:20.256187 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:21.258367 kubelet[2853]: E1105 04:54:21.258305 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:23.072778 kubelet[2853]: E1105 04:54:23.072738 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:23.855383 kubelet[2853]: E1105 04:54:23.855330 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 5 04:54:24.263262 kubelet[2853]: E1105 04:54:24.263224 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:24.551544 sudo[1880]: pam_unix(sudo:session): session closed for user root Nov 5 04:54:24.553971 sshd[1879]: Connection closed by 10.0.0.1 port 59430 Nov 5 04:54:24.557194 sshd-session[1873]: pam_unix(sshd:session): session closed for user core Nov 5 04:54:24.567036 systemd-logind[1631]: Session 9 logged out. Waiting for processes to exit. Nov 5 04:54:24.568323 systemd[1]: sshd@8-10.0.0.99:22-10.0.0.1:59430.service: Deactivated successfully. Nov 5 04:54:24.575069 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 04:54:24.576169 systemd[1]: session-9.scope: Consumed 6.491s CPU time, 214.8M memory peak. Nov 5 04:54:24.583010 systemd-logind[1631]: Removed session 9. Nov 5 04:54:28.526049 systemd[1]: Created slice kubepods-besteffort-podd55b3f95_a283_4544_8255_e0fecada5bff.slice - libcontainer container kubepods-besteffort-podd55b3f95_a283_4544_8255_e0fecada5bff.slice. 
Nov 5 04:54:28.626742 kubelet[2853]: I1105 04:54:28.626686 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d55b3f95-a283-4544-8255-e0fecada5bff-tigera-ca-bundle\") pod \"calico-typha-7897845f64-xk5xh\" (UID: \"d55b3f95-a283-4544-8255-e0fecada5bff\") " pod="calico-system/calico-typha-7897845f64-xk5xh" Nov 5 04:54:28.626742 kubelet[2853]: I1105 04:54:28.626731 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d55b3f95-a283-4544-8255-e0fecada5bff-typha-certs\") pod \"calico-typha-7897845f64-xk5xh\" (UID: \"d55b3f95-a283-4544-8255-e0fecada5bff\") " pod="calico-system/calico-typha-7897845f64-xk5xh" Nov 5 04:54:28.626742 kubelet[2853]: I1105 04:54:28.626755 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9d9\" (UniqueName: \"kubernetes.io/projected/d55b3f95-a283-4544-8255-e0fecada5bff-kube-api-access-cf9d9\") pod \"calico-typha-7897845f64-xk5xh\" (UID: \"d55b3f95-a283-4544-8255-e0fecada5bff\") " pod="calico-system/calico-typha-7897845f64-xk5xh" Nov 5 04:54:28.702917 systemd[1]: Created slice kubepods-besteffort-podf2490ade_c8ae_4305_b6ab_87c3d3cae0a1.slice - libcontainer container kubepods-besteffort-podf2490ade_c8ae_4305_b6ab_87c3d3cae0a1.slice. 
Nov 5 04:54:28.727535 kubelet[2853]: I1105 04:54:28.727475 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f2490ade-c8ae-4305-b6ab-87c3d3cae0a1-cni-bin-dir\") pod \"calico-node-lc7sq\" (UID: \"f2490ade-c8ae-4305-b6ab-87c3d3cae0a1\") " pod="calico-system/calico-node-lc7sq" Nov 5 04:54:28.728040 kubelet[2853]: I1105 04:54:28.727912 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2490ade-c8ae-4305-b6ab-87c3d3cae0a1-tigera-ca-bundle\") pod \"calico-node-lc7sq\" (UID: \"f2490ade-c8ae-4305-b6ab-87c3d3cae0a1\") " pod="calico-system/calico-node-lc7sq" Nov 5 04:54:28.728040 kubelet[2853]: I1105 04:54:28.727963 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2490ade-c8ae-4305-b6ab-87c3d3cae0a1-xtables-lock\") pod \"calico-node-lc7sq\" (UID: \"f2490ade-c8ae-4305-b6ab-87c3d3cae0a1\") " pod="calico-system/calico-node-lc7sq" Nov 5 04:54:28.728040 kubelet[2853]: I1105 04:54:28.727982 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjhbn\" (UniqueName: \"kubernetes.io/projected/f2490ade-c8ae-4305-b6ab-87c3d3cae0a1-kube-api-access-wjhbn\") pod \"calico-node-lc7sq\" (UID: \"f2490ade-c8ae-4305-b6ab-87c3d3cae0a1\") " pod="calico-system/calico-node-lc7sq" Nov 5 04:54:28.728165 kubelet[2853]: I1105 04:54:28.728084 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f2490ade-c8ae-4305-b6ab-87c3d3cae0a1-cni-log-dir\") pod \"calico-node-lc7sq\" (UID: \"f2490ade-c8ae-4305-b6ab-87c3d3cae0a1\") " pod="calico-system/calico-node-lc7sq" Nov 5 04:54:28.728165 kubelet[2853]: I1105 04:54:28.728128 2853 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f2490ade-c8ae-4305-b6ab-87c3d3cae0a1-cni-net-dir\") pod \"calico-node-lc7sq\" (UID: \"f2490ade-c8ae-4305-b6ab-87c3d3cae0a1\") " pod="calico-system/calico-node-lc7sq" Nov 5 04:54:28.728165 kubelet[2853]: I1105 04:54:28.728150 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2490ade-c8ae-4305-b6ab-87c3d3cae0a1-lib-modules\") pod \"calico-node-lc7sq\" (UID: \"f2490ade-c8ae-4305-b6ab-87c3d3cae0a1\") " pod="calico-system/calico-node-lc7sq" Nov 5 04:54:28.728239 kubelet[2853]: I1105 04:54:28.728165 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f2490ade-c8ae-4305-b6ab-87c3d3cae0a1-var-lib-calico\") pod \"calico-node-lc7sq\" (UID: \"f2490ade-c8ae-4305-b6ab-87c3d3cae0a1\") " pod="calico-system/calico-node-lc7sq" Nov 5 04:54:28.728239 kubelet[2853]: I1105 04:54:28.728230 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f2490ade-c8ae-4305-b6ab-87c3d3cae0a1-flexvol-driver-host\") pod \"calico-node-lc7sq\" (UID: \"f2490ade-c8ae-4305-b6ab-87c3d3cae0a1\") " pod="calico-system/calico-node-lc7sq" Nov 5 04:54:28.728292 kubelet[2853]: I1105 04:54:28.728252 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f2490ade-c8ae-4305-b6ab-87c3d3cae0a1-node-certs\") pod \"calico-node-lc7sq\" (UID: \"f2490ade-c8ae-4305-b6ab-87c3d3cae0a1\") " pod="calico-system/calico-node-lc7sq" Nov 5 04:54:28.728292 kubelet[2853]: I1105 04:54:28.728268 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f2490ade-c8ae-4305-b6ab-87c3d3cae0a1-var-run-calico\") pod \"calico-node-lc7sq\" (UID: \"f2490ade-c8ae-4305-b6ab-87c3d3cae0a1\") " pod="calico-system/calico-node-lc7sq" Nov 5 04:54:28.728292 kubelet[2853]: I1105 04:54:28.728284 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f2490ade-c8ae-4305-b6ab-87c3d3cae0a1-policysync\") pod \"calico-node-lc7sq\" (UID: \"f2490ade-c8ae-4305-b6ab-87c3d3cae0a1\") " pod="calico-system/calico-node-lc7sq" Nov 5 04:54:28.831387 kubelet[2853]: E1105 04:54:28.831246 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.831387 kubelet[2853]: W1105 04:54:28.831271 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.831387 kubelet[2853]: E1105 04:54:28.831338 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:28.832156 containerd[1646]: time="2025-11-05T04:54:28.832088547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7897845f64-xk5xh,Uid:d55b3f95-a283-4544-8255-e0fecada5bff,Namespace:calico-system,Attempt:0,}" Nov 5 04:54:28.833841 kubelet[2853]: E1105 04:54:28.833720 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.834293 kubelet[2853]: E1105 04:54:28.834246 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.834431 kubelet[2853]: W1105 04:54:28.834262 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.834431 kubelet[2853]: E1105 04:54:28.834394 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.839762 kubelet[2853]: E1105 04:54:28.839738 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.839762 kubelet[2853]: W1105 04:54:28.839756 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.839869 kubelet[2853]: E1105 04:54:28.839770 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.857878 containerd[1646]: time="2025-11-05T04:54:28.857174629Z" level=info msg="connecting to shim 51ec81d9e2d48066581ba16f3fe377000ba94f2406f65eca9faff1a41b9457cf" address="unix:///run/containerd/s/b26043c8cf8fa9a31590db1254df182bdd1b4063e8b0271bf812781ffd9ffd8b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:54:28.896830 kubelet[2853]: E1105 04:54:28.896478 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:54:28.903019 systemd[1]: Started cri-containerd-51ec81d9e2d48066581ba16f3fe377000ba94f2406f65eca9faff1a41b9457cf.scope - libcontainer container 51ec81d9e2d48066581ba16f3fe377000ba94f2406f65eca9faff1a41b9457cf. Nov 5 04:54:28.912664 kubelet[2853]: E1105 04:54:28.912637 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.912664 kubelet[2853]: W1105 04:54:28.912658 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.912783 kubelet[2853]: E1105 04:54:28.912678 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.912975 kubelet[2853]: E1105 04:54:28.912954 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.912975 kubelet[2853]: W1105 04:54:28.912969 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.913066 kubelet[2853]: E1105 04:54:28.912978 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.913202 kubelet[2853]: E1105 04:54:28.913161 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.913202 kubelet[2853]: W1105 04:54:28.913177 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.913202 kubelet[2853]: E1105 04:54:28.913201 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.913598 kubelet[2853]: E1105 04:54:28.913580 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.913598 kubelet[2853]: W1105 04:54:28.913593 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.913676 kubelet[2853]: E1105 04:54:28.913603 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.913802 kubelet[2853]: E1105 04:54:28.913780 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.913802 kubelet[2853]: W1105 04:54:28.913791 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.913802 kubelet[2853]: E1105 04:54:28.913799 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.913994 kubelet[2853]: E1105 04:54:28.913978 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.913994 kubelet[2853]: W1105 04:54:28.913990 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.914177 kubelet[2853]: E1105 04:54:28.913998 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.914314 kubelet[2853]: E1105 04:54:28.914296 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.914314 kubelet[2853]: W1105 04:54:28.914307 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.914314 kubelet[2853]: E1105 04:54:28.914315 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.914523 kubelet[2853]: E1105 04:54:28.914490 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.914523 kubelet[2853]: W1105 04:54:28.914502 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.914523 kubelet[2853]: E1105 04:54:28.914510 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.914723 kubelet[2853]: E1105 04:54:28.914707 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.914723 kubelet[2853]: W1105 04:54:28.914719 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.914940 kubelet[2853]: E1105 04:54:28.914727 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.915018 kubelet[2853]: E1105 04:54:28.915002 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.915018 kubelet[2853]: W1105 04:54:28.915014 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.915100 kubelet[2853]: E1105 04:54:28.915024 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.915235 kubelet[2853]: E1105 04:54:28.915206 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.915235 kubelet[2853]: W1105 04:54:28.915218 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.915235 kubelet[2853]: E1105 04:54:28.915225 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.915511 kubelet[2853]: E1105 04:54:28.915494 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.915511 kubelet[2853]: W1105 04:54:28.915509 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.916086 kubelet[2853]: E1105 04:54:28.916064 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.916562 kubelet[2853]: E1105 04:54:28.916536 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.916562 kubelet[2853]: W1105 04:54:28.916550 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.916562 kubelet[2853]: E1105 04:54:28.916559 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.917118 kubelet[2853]: E1105 04:54:28.917057 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.917118 kubelet[2853]: W1105 04:54:28.917098 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.917200 kubelet[2853]: E1105 04:54:28.917132 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.917644 kubelet[2853]: E1105 04:54:28.917623 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.917644 kubelet[2853]: W1105 04:54:28.917639 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.917726 kubelet[2853]: E1105 04:54:28.917650 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.918043 kubelet[2853]: E1105 04:54:28.918005 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.918043 kubelet[2853]: W1105 04:54:28.918034 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.918137 kubelet[2853]: E1105 04:54:28.918053 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.918384 kubelet[2853]: E1105 04:54:28.918338 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.918384 kubelet[2853]: W1105 04:54:28.918361 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.918384 kubelet[2853]: E1105 04:54:28.918384 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.918679 kubelet[2853]: E1105 04:54:28.918656 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.918679 kubelet[2853]: W1105 04:54:28.918673 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.918679 kubelet[2853]: E1105 04:54:28.918683 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.919129 kubelet[2853]: E1105 04:54:28.919089 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.919129 kubelet[2853]: W1105 04:54:28.919121 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.919237 kubelet[2853]: E1105 04:54:28.919133 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.919488 kubelet[2853]: E1105 04:54:28.919465 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.919488 kubelet[2853]: W1105 04:54:28.919479 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.919580 kubelet[2853]: E1105 04:54:28.919500 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.930042 kubelet[2853]: E1105 04:54:28.929892 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.930042 kubelet[2853]: W1105 04:54:28.929909 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.930042 kubelet[2853]: E1105 04:54:28.929927 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.930042 kubelet[2853]: I1105 04:54:28.929958 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c81581c-25f7-472a-9631-e6c9dfccb268-registration-dir\") pod \"csi-node-driver-chq6c\" (UID: \"2c81581c-25f7-472a-9631-e6c9dfccb268\") " pod="calico-system/csi-node-driver-chq6c" Nov 5 04:54:28.930352 kubelet[2853]: E1105 04:54:28.930264 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.930352 kubelet[2853]: W1105 04:54:28.930290 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.930352 kubelet[2853]: E1105 04:54:28.930305 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.930352 kubelet[2853]: I1105 04:54:28.930341 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s56m\" (UniqueName: \"kubernetes.io/projected/2c81581c-25f7-472a-9631-e6c9dfccb268-kube-api-access-5s56m\") pod \"csi-node-driver-chq6c\" (UID: \"2c81581c-25f7-472a-9631-e6c9dfccb268\") " pod="calico-system/csi-node-driver-chq6c" Nov 5 04:54:28.930766 kubelet[2853]: E1105 04:54:28.930716 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.930766 kubelet[2853]: W1105 04:54:28.930742 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.930766 kubelet[2853]: E1105 04:54:28.930769 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.930954 kubelet[2853]: I1105 04:54:28.930818 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c81581c-25f7-472a-9631-e6c9dfccb268-socket-dir\") pod \"csi-node-driver-chq6c\" (UID: \"2c81581c-25f7-472a-9631-e6c9dfccb268\") " pod="calico-system/csi-node-driver-chq6c" Nov 5 04:54:28.931385 kubelet[2853]: E1105 04:54:28.931366 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.931385 kubelet[2853]: W1105 04:54:28.931379 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.931530 kubelet[2853]: E1105 04:54:28.931397 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.932097 kubelet[2853]: E1105 04:54:28.931941 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.932097 kubelet[2853]: W1105 04:54:28.931993 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.932097 kubelet[2853]: E1105 04:54:28.932018 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.932909 kubelet[2853]: E1105 04:54:28.932459 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.932909 kubelet[2853]: W1105 04:54:28.932473 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.932909 kubelet[2853]: E1105 04:54:28.932483 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.933070 kubelet[2853]: E1105 04:54:28.933031 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.933070 kubelet[2853]: W1105 04:54:28.933041 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.933197 kubelet[2853]: E1105 04:54:28.933070 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.933236 kubelet[2853]: I1105 04:54:28.933211 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2c81581c-25f7-472a-9631-e6c9dfccb268-varrun\") pod \"csi-node-driver-chq6c\" (UID: \"2c81581c-25f7-472a-9631-e6c9dfccb268\") " pod="calico-system/csi-node-driver-chq6c" Nov 5 04:54:28.933956 kubelet[2853]: E1105 04:54:28.933723 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.933956 kubelet[2853]: W1105 04:54:28.933750 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.933956 kubelet[2853]: E1105 04:54:28.933760 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.934494 kubelet[2853]: E1105 04:54:28.934213 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.934494 kubelet[2853]: W1105 04:54:28.934223 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.934494 kubelet[2853]: E1105 04:54:28.934232 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.934843 kubelet[2853]: E1105 04:54:28.934788 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.934843 kubelet[2853]: W1105 04:54:28.934797 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.934843 kubelet[2853]: E1105 04:54:28.934812 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.935142 kubelet[2853]: E1105 04:54:28.935118 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.935142 kubelet[2853]: W1105 04:54:28.935135 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.935142 kubelet[2853]: E1105 04:54:28.935144 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.935486 kubelet[2853]: E1105 04:54:28.935422 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.935486 kubelet[2853]: W1105 04:54:28.935447 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.935486 kubelet[2853]: E1105 04:54:28.935480 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.935851 kubelet[2853]: I1105 04:54:28.935541 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c81581c-25f7-472a-9631-e6c9dfccb268-kubelet-dir\") pod \"csi-node-driver-chq6c\" (UID: \"2c81581c-25f7-472a-9631-e6c9dfccb268\") " pod="calico-system/csi-node-driver-chq6c" Nov 5 04:54:28.936006 kubelet[2853]: E1105 04:54:28.935986 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.936006 kubelet[2853]: W1105 04:54:28.936003 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.936505 kubelet[2853]: E1105 04:54:28.936015 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:28.936505 kubelet[2853]: E1105 04:54:28.936423 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.936505 kubelet[2853]: W1105 04:54:28.936432 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.936505 kubelet[2853]: E1105 04:54:28.936447 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:28.937156 kubelet[2853]: E1105 04:54:28.936813 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:28.937156 kubelet[2853]: W1105 04:54:28.936837 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:28.937156 kubelet[2853]: E1105 04:54:28.936907 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.008730 kubelet[2853]: E1105 04:54:29.006812 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:29.015786 containerd[1646]: time="2025-11-05T04:54:29.015739025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lc7sq,Uid:f2490ade-c8ae-4305-b6ab-87c3d3cae0a1,Namespace:calico-system,Attempt:0,}" Nov 5 04:54:29.029779 containerd[1646]: time="2025-11-05T04:54:29.029738121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7897845f64-xk5xh,Uid:d55b3f95-a283-4544-8255-e0fecada5bff,Namespace:calico-system,Attempt:0,} returns sandbox id \"51ec81d9e2d48066581ba16f3fe377000ba94f2406f65eca9faff1a41b9457cf\"" Nov 5 04:54:29.031107 kubelet[2853]: E1105 04:54:29.031070 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:29.035601 containerd[1646]: time="2025-11-05T04:54:29.035545263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 04:54:29.038320 kubelet[2853]: E1105 04:54:29.038150 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.038320 kubelet[2853]: W1105 04:54:29.038184 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.038320 kubelet[2853]: E1105 04:54:29.038215 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.038619 kubelet[2853]: E1105 04:54:29.038524 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.038619 kubelet[2853]: W1105 04:54:29.038534 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.038619 kubelet[2853]: E1105 04:54:29.038543 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.039745 kubelet[2853]: E1105 04:54:29.039702 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.039745 kubelet[2853]: W1105 04:54:29.039725 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.039745 kubelet[2853]: E1105 04:54:29.039735 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.040038 kubelet[2853]: E1105 04:54:29.040003 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.040038 kubelet[2853]: W1105 04:54:29.040027 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.040038 kubelet[2853]: E1105 04:54:29.040038 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.040573 kubelet[2853]: E1105 04:54:29.040434 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.040573 kubelet[2853]: W1105 04:54:29.040449 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.040573 kubelet[2853]: E1105 04:54:29.040460 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.043072 kubelet[2853]: E1105 04:54:29.043022 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.043072 kubelet[2853]: W1105 04:54:29.043054 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.043072 kubelet[2853]: E1105 04:54:29.043077 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.043509 kubelet[2853]: E1105 04:54:29.043478 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.043509 kubelet[2853]: W1105 04:54:29.043498 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.043691 kubelet[2853]: E1105 04:54:29.043532 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.044271 kubelet[2853]: E1105 04:54:29.044222 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.044271 kubelet[2853]: W1105 04:54:29.044253 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.047062 kubelet[2853]: E1105 04:54:29.044278 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.047062 kubelet[2853]: E1105 04:54:29.045955 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.047062 kubelet[2853]: W1105 04:54:29.045975 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.047062 kubelet[2853]: E1105 04:54:29.046004 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.048454 kubelet[2853]: E1105 04:54:29.048417 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.048838 kubelet[2853]: W1105 04:54:29.048594 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.048838 kubelet[2853]: E1105 04:54:29.048627 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.049430 kubelet[2853]: E1105 04:54:29.049266 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.049430 kubelet[2853]: W1105 04:54:29.049296 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.049430 kubelet[2853]: E1105 04:54:29.049322 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.050429 kubelet[2853]: E1105 04:54:29.050252 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.050429 kubelet[2853]: W1105 04:54:29.050282 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.050429 kubelet[2853]: E1105 04:54:29.050303 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.052699 kubelet[2853]: E1105 04:54:29.052669 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.052881 kubelet[2853]: W1105 04:54:29.052801 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.052881 kubelet[2853]: E1105 04:54:29.052829 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.058438 kubelet[2853]: E1105 04:54:29.057781 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.063223 kubelet[2853]: W1105 04:54:29.063150 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.063345 kubelet[2853]: E1105 04:54:29.063229 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.064891 kubelet[2853]: E1105 04:54:29.064280 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.064891 kubelet[2853]: W1105 04:54:29.064321 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.064891 kubelet[2853]: E1105 04:54:29.064349 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.065909 kubelet[2853]: E1105 04:54:29.065884 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.065909 kubelet[2853]: W1105 04:54:29.065901 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.065909 kubelet[2853]: E1105 04:54:29.065911 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.066265 kubelet[2853]: E1105 04:54:29.066226 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.066265 kubelet[2853]: W1105 04:54:29.066242 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.066265 kubelet[2853]: E1105 04:54:29.066251 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.066780 kubelet[2853]: E1105 04:54:29.066755 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.066780 kubelet[2853]: W1105 04:54:29.066776 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.066895 kubelet[2853]: E1105 04:54:29.066786 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.068003 kubelet[2853]: E1105 04:54:29.067944 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.068003 kubelet[2853]: W1105 04:54:29.067981 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.068003 kubelet[2853]: E1105 04:54:29.068004 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.069549 kubelet[2853]: E1105 04:54:29.069511 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.069549 kubelet[2853]: W1105 04:54:29.069538 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.069738 kubelet[2853]: E1105 04:54:29.069563 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.072792 kubelet[2853]: E1105 04:54:29.071138 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.072792 kubelet[2853]: W1105 04:54:29.071171 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.072792 kubelet[2853]: E1105 04:54:29.071213 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.072792 kubelet[2853]: E1105 04:54:29.071687 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.072792 kubelet[2853]: W1105 04:54:29.071696 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.072792 kubelet[2853]: E1105 04:54:29.071705 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.072792 kubelet[2853]: E1105 04:54:29.072034 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.072792 kubelet[2853]: W1105 04:54:29.072054 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.072792 kubelet[2853]: E1105 04:54:29.072080 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.072792 kubelet[2853]: E1105 04:54:29.072375 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.073067 kubelet[2853]: W1105 04:54:29.072386 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.073067 kubelet[2853]: E1105 04:54:29.072398 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.073067 kubelet[2853]: E1105 04:54:29.072678 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.073067 kubelet[2853]: W1105 04:54:29.072690 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.073067 kubelet[2853]: E1105 04:54:29.072701 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:29.077188 containerd[1646]: time="2025-11-05T04:54:29.077078536Z" level=info msg="connecting to shim e18853584fb5e68aa39639614fab8b7b40b68ce08e88592d98d3fc6a82f00773" address="unix:///run/containerd/s/e4e18d618d514c109297a3b76a9ba9746485a1636a3a1edd7a8de15ae937fc2d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:54:29.093678 kubelet[2853]: E1105 04:54:29.091183 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:29.093678 kubelet[2853]: W1105 04:54:29.091207 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:29.093678 kubelet[2853]: E1105 04:54:29.091231 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:29.123061 systemd[1]: Started cri-containerd-e18853584fb5e68aa39639614fab8b7b40b68ce08e88592d98d3fc6a82f00773.scope - libcontainer container e18853584fb5e68aa39639614fab8b7b40b68ce08e88592d98d3fc6a82f00773. Nov 5 04:54:29.152797 containerd[1646]: time="2025-11-05T04:54:29.152740822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lc7sq,Uid:f2490ade-c8ae-4305-b6ab-87c3d3cae0a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"e18853584fb5e68aa39639614fab8b7b40b68ce08e88592d98d3fc6a82f00773\"" Nov 5 04:54:29.154074 kubelet[2853]: E1105 04:54:29.154002 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:30.963513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1404106837.mount: Deactivated successfully. 
Nov 5 04:54:31.224830 kubelet[2853]: E1105 04:54:31.224627 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:54:32.024717 containerd[1646]: time="2025-11-05T04:54:32.024615875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:32.025570 containerd[1646]: time="2025-11-05T04:54:32.025525867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33736633" Nov 5 04:54:32.026654 containerd[1646]: time="2025-11-05T04:54:32.026605669Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:32.028749 containerd[1646]: time="2025-11-05T04:54:32.028701662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:32.029302 containerd[1646]: time="2025-11-05T04:54:32.029265042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.99367182s" Nov 5 04:54:32.029302 containerd[1646]: time="2025-11-05T04:54:32.029299878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 04:54:32.031340 containerd[1646]: time="2025-11-05T04:54:32.031294810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 04:54:32.043982 containerd[1646]: time="2025-11-05T04:54:32.043927997Z" level=info msg="CreateContainer within sandbox \"51ec81d9e2d48066581ba16f3fe377000ba94f2406f65eca9faff1a41b9457cf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 04:54:32.053889 containerd[1646]: time="2025-11-05T04:54:32.053416077Z" level=info msg="Container c6e1a8a4a8708bcb875429a7641aea34aa6a3896a7a77537020b6a77767b2806: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:54:32.062422 containerd[1646]: time="2025-11-05T04:54:32.062377196Z" level=info msg="CreateContainer within sandbox \"51ec81d9e2d48066581ba16f3fe377000ba94f2406f65eca9faff1a41b9457cf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c6e1a8a4a8708bcb875429a7641aea34aa6a3896a7a77537020b6a77767b2806\"" Nov 5 04:54:32.063883 containerd[1646]: time="2025-11-05T04:54:32.063001591Z" level=info msg="StartContainer for \"c6e1a8a4a8708bcb875429a7641aea34aa6a3896a7a77537020b6a77767b2806\"" Nov 5 04:54:32.064393 containerd[1646]: time="2025-11-05T04:54:32.064333206Z" level=info msg="connecting to shim c6e1a8a4a8708bcb875429a7641aea34aa6a3896a7a77537020b6a77767b2806" address="unix:///run/containerd/s/b26043c8cf8fa9a31590db1254df182bdd1b4063e8b0271bf812781ffd9ffd8b" protocol=ttrpc version=3 Nov 5 04:54:32.090104 systemd[1]: Started cri-containerd-c6e1a8a4a8708bcb875429a7641aea34aa6a3896a7a77537020b6a77767b2806.scope - libcontainer container c6e1a8a4a8708bcb875429a7641aea34aa6a3896a7a77537020b6a77767b2806. 
Nov 5 04:54:32.146284 containerd[1646]: time="2025-11-05T04:54:32.146222355Z" level=info msg="StartContainer for \"c6e1a8a4a8708bcb875429a7641aea34aa6a3896a7a77537020b6a77767b2806\" returns successfully" Nov 5 04:54:32.289785 kubelet[2853]: E1105 04:54:32.289600 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:32.351991 kubelet[2853]: E1105 04:54:32.351938 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.351991 kubelet[2853]: W1105 04:54:32.351974 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.351991 kubelet[2853]: E1105 04:54:32.352006 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.352250 kubelet[2853]: E1105 04:54:32.352203 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.352250 kubelet[2853]: W1105 04:54:32.352212 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.352250 kubelet[2853]: E1105 04:54:32.352220 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.352443 kubelet[2853]: E1105 04:54:32.352422 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.352443 kubelet[2853]: W1105 04:54:32.352434 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.352443 kubelet[2853]: E1105 04:54:32.352442 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.352744 kubelet[2853]: E1105 04:54:32.352726 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.352744 kubelet[2853]: W1105 04:54:32.352738 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.352809 kubelet[2853]: E1105 04:54:32.352747 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.352987 kubelet[2853]: E1105 04:54:32.352961 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.352987 kubelet[2853]: W1105 04:54:32.352975 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.352987 kubelet[2853]: E1105 04:54:32.352983 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.353193 kubelet[2853]: E1105 04:54:32.353177 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.353193 kubelet[2853]: W1105 04:54:32.353188 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.353243 kubelet[2853]: E1105 04:54:32.353197 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.353392 kubelet[2853]: E1105 04:54:32.353376 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.353392 kubelet[2853]: W1105 04:54:32.353388 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.353452 kubelet[2853]: E1105 04:54:32.353395 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.353584 kubelet[2853]: E1105 04:54:32.353568 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.353584 kubelet[2853]: W1105 04:54:32.353580 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.353646 kubelet[2853]: E1105 04:54:32.353588 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.353782 kubelet[2853]: E1105 04:54:32.353766 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.353782 kubelet[2853]: W1105 04:54:32.353777 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.353829 kubelet[2853]: E1105 04:54:32.353786 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.353999 kubelet[2853]: E1105 04:54:32.353984 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.353999 kubelet[2853]: W1105 04:54:32.353995 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.354050 kubelet[2853]: E1105 04:54:32.354003 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.354191 kubelet[2853]: E1105 04:54:32.354176 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.354191 kubelet[2853]: W1105 04:54:32.354187 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.354234 kubelet[2853]: E1105 04:54:32.354195 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.354387 kubelet[2853]: E1105 04:54:32.354370 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.354387 kubelet[2853]: W1105 04:54:32.354381 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.354441 kubelet[2853]: E1105 04:54:32.354390 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.354590 kubelet[2853]: E1105 04:54:32.354573 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.354590 kubelet[2853]: W1105 04:54:32.354584 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.354639 kubelet[2853]: E1105 04:54:32.354592 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.354796 kubelet[2853]: E1105 04:54:32.354780 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.354796 kubelet[2853]: W1105 04:54:32.354791 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.354846 kubelet[2853]: E1105 04:54:32.354799 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.355004 kubelet[2853]: E1105 04:54:32.354987 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.355004 kubelet[2853]: W1105 04:54:32.354999 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.355058 kubelet[2853]: E1105 04:54:32.355007 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.372456 kubelet[2853]: E1105 04:54:32.372436 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.372456 kubelet[2853]: W1105 04:54:32.372449 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.372456 kubelet[2853]: E1105 04:54:32.372460 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.372768 kubelet[2853]: E1105 04:54:32.372722 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.372768 kubelet[2853]: W1105 04:54:32.372738 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.372768 kubelet[2853]: E1105 04:54:32.372746 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.373119 kubelet[2853]: E1105 04:54:32.372973 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.373119 kubelet[2853]: W1105 04:54:32.372981 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.373119 kubelet[2853]: E1105 04:54:32.372991 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.373419 kubelet[2853]: E1105 04:54:32.373372 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.373419 kubelet[2853]: W1105 04:54:32.373405 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.373474 kubelet[2853]: E1105 04:54:32.373431 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.373666 kubelet[2853]: E1105 04:54:32.373639 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.373666 kubelet[2853]: W1105 04:54:32.373650 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.373666 kubelet[2853]: E1105 04:54:32.373658 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.373872 kubelet[2853]: E1105 04:54:32.373836 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.373872 kubelet[2853]: W1105 04:54:32.373848 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.373932 kubelet[2853]: E1105 04:54:32.373878 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.374116 kubelet[2853]: E1105 04:54:32.374090 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.374116 kubelet[2853]: W1105 04:54:32.374101 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.374116 kubelet[2853]: E1105 04:54:32.374110 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.374304 kubelet[2853]: E1105 04:54:32.374286 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.374304 kubelet[2853]: W1105 04:54:32.374297 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.374304 kubelet[2853]: E1105 04:54:32.374305 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.374509 kubelet[2853]: E1105 04:54:32.374490 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.374509 kubelet[2853]: W1105 04:54:32.374501 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.374509 kubelet[2853]: E1105 04:54:32.374509 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.374701 kubelet[2853]: E1105 04:54:32.374683 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.374701 kubelet[2853]: W1105 04:54:32.374693 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.374701 kubelet[2853]: E1105 04:54:32.374701 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.374907 kubelet[2853]: E1105 04:54:32.374889 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.374907 kubelet[2853]: W1105 04:54:32.374900 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.374962 kubelet[2853]: E1105 04:54:32.374908 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.375118 kubelet[2853]: E1105 04:54:32.375101 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.375118 kubelet[2853]: W1105 04:54:32.375110 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.375188 kubelet[2853]: E1105 04:54:32.375120 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.375460 kubelet[2853]: E1105 04:54:32.375422 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.375460 kubelet[2853]: W1105 04:54:32.375443 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.375460 kubelet[2853]: E1105 04:54:32.375454 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.375677 kubelet[2853]: E1105 04:54:32.375649 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.375677 kubelet[2853]: W1105 04:54:32.375661 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.375677 kubelet[2853]: E1105 04:54:32.375669 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.375866 kubelet[2853]: E1105 04:54:32.375840 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.375866 kubelet[2853]: W1105 04:54:32.375851 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.375925 kubelet[2853]: E1105 04:54:32.375875 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.376122 kubelet[2853]: E1105 04:54:32.376103 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.376122 kubelet[2853]: W1105 04:54:32.376115 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.376122 kubelet[2853]: E1105 04:54:32.376123 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:32.376418 kubelet[2853]: E1105 04:54:32.376388 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.376418 kubelet[2853]: W1105 04:54:32.376403 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.376418 kubelet[2853]: E1105 04:54:32.376414 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:32.376636 kubelet[2853]: E1105 04:54:32.376617 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:32.376636 kubelet[2853]: W1105 04:54:32.376627 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:32.376636 kubelet[2853]: E1105 04:54:32.376636 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:33.224887 kubelet[2853]: E1105 04:54:33.224674 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:54:33.262801 containerd[1646]: time="2025-11-05T04:54:33.262730166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:33.263818 containerd[1646]: time="2025-11-05T04:54:33.263760874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Nov 5 04:54:33.265148 containerd[1646]: time="2025-11-05T04:54:33.265091137Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:33.267087 containerd[1646]: time="2025-11-05T04:54:33.267044070Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:33.267556 containerd[1646]: time="2025-11-05T04:54:33.267501972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.236170141s" Nov 5 04:54:33.267600 containerd[1646]: time="2025-11-05T04:54:33.267555332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 04:54:33.272146 containerd[1646]: time="2025-11-05T04:54:33.271270430Z" level=info msg="CreateContainer within sandbox \"e18853584fb5e68aa39639614fab8b7b40b68ce08e88592d98d3fc6a82f00773\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 04:54:33.282342 containerd[1646]: time="2025-11-05T04:54:33.282260601Z" level=info msg="Container 953b07c832e889c698836ba5da5cc8a4eca33292a6b041071f4186e5271f4b26: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:54:33.291851 containerd[1646]: time="2025-11-05T04:54:33.291804984Z" level=info msg="CreateContainer within sandbox \"e18853584fb5e68aa39639614fab8b7b40b68ce08e88592d98d3fc6a82f00773\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"953b07c832e889c698836ba5da5cc8a4eca33292a6b041071f4186e5271f4b26\"" Nov 5 04:54:33.292154 containerd[1646]: time="2025-11-05T04:54:33.292132419Z" level=info msg="StartContainer for \"953b07c832e889c698836ba5da5cc8a4eca33292a6b041071f4186e5271f4b26\"" Nov 5 04:54:33.293469 containerd[1646]: time="2025-11-05T04:54:33.293446030Z" 
level=info msg="connecting to shim 953b07c832e889c698836ba5da5cc8a4eca33292a6b041071f4186e5271f4b26" address="unix:///run/containerd/s/e4e18d618d514c109297a3b76a9ba9746485a1636a3a1edd7a8de15ae937fc2d" protocol=ttrpc version=3 Nov 5 04:54:33.293576 kubelet[2853]: I1105 04:54:33.293516 2853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 04:54:33.294035 kubelet[2853]: E1105 04:54:33.293854 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:33.315117 systemd[1]: Started cri-containerd-953b07c832e889c698836ba5da5cc8a4eca33292a6b041071f4186e5271f4b26.scope - libcontainer container 953b07c832e889c698836ba5da5cc8a4eca33292a6b041071f4186e5271f4b26. Nov 5 04:54:33.363033 kubelet[2853]: E1105 04:54:33.362971 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.363583 kubelet[2853]: W1105 04:54:33.363104 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.363583 kubelet[2853]: E1105 04:54:33.363138 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:33.364224 kubelet[2853]: E1105 04:54:33.364118 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.364224 kubelet[2853]: W1105 04:54:33.364130 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.364224 kubelet[2853]: E1105 04:54:33.364139 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:33.364622 kubelet[2853]: E1105 04:54:33.364572 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.364622 kubelet[2853]: W1105 04:54:33.364584 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.364887 kubelet[2853]: E1105 04:54:33.364595 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:33.365242 kubelet[2853]: E1105 04:54:33.365170 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.365242 kubelet[2853]: W1105 04:54:33.365181 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.365242 kubelet[2853]: E1105 04:54:33.365190 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:33.365795 kubelet[2853]: E1105 04:54:33.365782 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.365957 kubelet[2853]: W1105 04:54:33.365850 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.365957 kubelet[2853]: E1105 04:54:33.365874 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:33.366457 kubelet[2853]: E1105 04:54:33.366384 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.366457 kubelet[2853]: W1105 04:54:33.366397 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.366610 kubelet[2853]: E1105 04:54:33.366411 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:33.366962 kubelet[2853]: E1105 04:54:33.366929 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.367383 kubelet[2853]: W1105 04:54:33.367051 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.367383 kubelet[2853]: E1105 04:54:33.367066 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:33.367689 containerd[1646]: time="2025-11-05T04:54:33.367641405Z" level=info msg="StartContainer for \"953b07c832e889c698836ba5da5cc8a4eca33292a6b041071f4186e5271f4b26\" returns successfully" Nov 5 04:54:33.368086 kubelet[2853]: E1105 04:54:33.367941 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.368086 kubelet[2853]: W1105 04:54:33.367953 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.368086 kubelet[2853]: E1105 04:54:33.367965 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:33.368635 kubelet[2853]: E1105 04:54:33.368528 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.368635 kubelet[2853]: W1105 04:54:33.368541 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.368635 kubelet[2853]: E1105 04:54:33.368551 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:33.369055 kubelet[2853]: E1105 04:54:33.368977 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.369055 kubelet[2853]: W1105 04:54:33.368989 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.369055 kubelet[2853]: E1105 04:54:33.368999 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:33.369505 kubelet[2853]: E1105 04:54:33.369399 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.369505 kubelet[2853]: W1105 04:54:33.369428 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.369505 kubelet[2853]: E1105 04:54:33.369438 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:33.369835 kubelet[2853]: E1105 04:54:33.369804 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.370013 kubelet[2853]: W1105 04:54:33.369906 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.370013 kubelet[2853]: E1105 04:54:33.369918 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:33.370517 kubelet[2853]: E1105 04:54:33.370492 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.370676 kubelet[2853]: W1105 04:54:33.370644 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.370771 kubelet[2853]: E1105 04:54:33.370734 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:54:33.371400 kubelet[2853]: E1105 04:54:33.371256 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.371400 kubelet[2853]: W1105 04:54:33.371267 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.371400 kubelet[2853]: E1105 04:54:33.371278 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:33.371706 kubelet[2853]: E1105 04:54:33.371633 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:54:33.371706 kubelet[2853]: W1105 04:54:33.371644 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:54:33.371706 kubelet[2853]: E1105 04:54:33.371653 2853 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:54:33.378528 systemd[1]: cri-containerd-953b07c832e889c698836ba5da5cc8a4eca33292a6b041071f4186e5271f4b26.scope: Deactivated successfully. 
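[Editor's note, not part of the log] The repeated bursts above come from the kubelet probing the FlexVolume directory `nodeagent~uds`: the driver executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is missing, the call produces empty output, and unmarshalling "" fails with "unexpected end of JSON input". A FlexVolume driver is invoked with an operation name as its first argument and must reply with a JSON status object on stdout. A minimal sketch of a driver that would satisfy the `init` probe (the function name `flexvol_call` is illustrative, not from the log):

```shell
#!/bin/sh
# Hedged sketch of a FlexVolume driver entry point. The kubelet invokes
# the driver as "<driver> <operation> [args...]" and parses stdout as JSON;
# an empty reply yields the "unexpected end of JSON input" errors above.
flexvol_call() {
  case "$1" in
    init)
      # Report success and declare that this driver does not implement
      # attach/detach, so the kubelet will not call those operations.
      echo '{"status": "Success", "capabilities": {"attach": false}}'
      ;;
    *)
      # Any operation this stub does not handle is declared unsupported.
      echo '{"status": "Not supported"}'
      return 1
      ;;
  esac
}

# What the kubelet's probe effectively runs:
flexvol_call init
# → {"status": "Success", "capabilities": {"attach": false}}
```

With a driver like this installed (and executable) at the probed path, the `init` call returns valid JSON and the probing errors stop; the log's errors persist precisely because no executable exists there at all.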
Nov 5 04:54:33.380826 containerd[1646]: time="2025-11-05T04:54:33.380763838Z" level=info msg="received exit event container_id:\"953b07c832e889c698836ba5da5cc8a4eca33292a6b041071f4186e5271f4b26\" id:\"953b07c832e889c698836ba5da5cc8a4eca33292a6b041071f4186e5271f4b26\" pid:3567 exited_at:{seconds:1762318473 nanos:380126198}" Nov 5 04:54:33.407338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-953b07c832e889c698836ba5da5cc8a4eca33292a6b041071f4186e5271f4b26-rootfs.mount: Deactivated successfully. Nov 5 04:54:34.297269 kubelet[2853]: E1105 04:54:34.297222 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:34.708597 kubelet[2853]: I1105 04:54:34.708318 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7897845f64-xk5xh" podStartSLOduration=3.712891038 podStartE2EDuration="6.708263508s" podCreationTimestamp="2025-11-05 04:54:28 +0000 UTC" firstStartedPulling="2025-11-05 04:54:29.034928972 +0000 UTC m=+19.902093519" lastFinishedPulling="2025-11-05 04:54:32.030301442 +0000 UTC m=+22.897465989" observedRunningTime="2025-11-05 04:54:32.313069044 +0000 UTC m=+23.180233591" watchObservedRunningTime="2025-11-05 04:54:34.708263508 +0000 UTC m=+25.575428055" Nov 5 04:54:35.224437 kubelet[2853]: E1105 04:54:35.224354 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:54:35.300648 kubelet[2853]: E1105 04:54:35.300604 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 
04:54:35.301315 containerd[1646]: time="2025-11-05T04:54:35.301252666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 04:54:37.223675 kubelet[2853]: E1105 04:54:37.223603 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:54:38.612967 containerd[1646]: time="2025-11-05T04:54:38.612906585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:38.613646 containerd[1646]: time="2025-11-05T04:54:38.613585411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 5 04:54:38.614818 containerd[1646]: time="2025-11-05T04:54:38.614770508Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:38.616812 containerd[1646]: time="2025-11-05T04:54:38.616765597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:38.617496 containerd[1646]: time="2025-11-05T04:54:38.617439353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.316115473s" Nov 5 04:54:38.617496 containerd[1646]: time="2025-11-05T04:54:38.617477414Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 04:54:38.621477 containerd[1646]: time="2025-11-05T04:54:38.621428299Z" level=info msg="CreateContainer within sandbox \"e18853584fb5e68aa39639614fab8b7b40b68ce08e88592d98d3fc6a82f00773\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 04:54:38.631348 containerd[1646]: time="2025-11-05T04:54:38.631277969Z" level=info msg="Container 13ce18a588833f674a5e730a65bf545904aa3ebb93c4bf47f9abef5c9a162768: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:54:38.640572 containerd[1646]: time="2025-11-05T04:54:38.640522974Z" level=info msg="CreateContainer within sandbox \"e18853584fb5e68aa39639614fab8b7b40b68ce08e88592d98d3fc6a82f00773\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"13ce18a588833f674a5e730a65bf545904aa3ebb93c4bf47f9abef5c9a162768\"" Nov 5 04:54:38.641313 containerd[1646]: time="2025-11-05T04:54:38.641265268Z" level=info msg="StartContainer for \"13ce18a588833f674a5e730a65bf545904aa3ebb93c4bf47f9abef5c9a162768\"" Nov 5 04:54:38.643425 containerd[1646]: time="2025-11-05T04:54:38.643391584Z" level=info msg="connecting to shim 13ce18a588833f674a5e730a65bf545904aa3ebb93c4bf47f9abef5c9a162768" address="unix:///run/containerd/s/e4e18d618d514c109297a3b76a9ba9746485a1636a3a1edd7a8de15ae937fc2d" protocol=ttrpc version=3 Nov 5 04:54:38.669068 systemd[1]: Started cri-containerd-13ce18a588833f674a5e730a65bf545904aa3ebb93c4bf47f9abef5c9a162768.scope - libcontainer container 13ce18a588833f674a5e730a65bf545904aa3ebb93c4bf47f9abef5c9a162768. 
Nov 5 04:54:38.719388 containerd[1646]: time="2025-11-05T04:54:38.719328852Z" level=info msg="StartContainer for \"13ce18a588833f674a5e730a65bf545904aa3ebb93c4bf47f9abef5c9a162768\" returns successfully" Nov 5 04:54:39.224083 kubelet[2853]: E1105 04:54:39.224003 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:54:39.310501 kubelet[2853]: E1105 04:54:39.310470 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:40.312010 kubelet[2853]: E1105 04:54:40.311969 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:41.224441 kubelet[2853]: E1105 04:54:41.224349 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:54:41.817813 systemd[1]: cri-containerd-13ce18a588833f674a5e730a65bf545904aa3ebb93c4bf47f9abef5c9a162768.scope: Deactivated successfully. Nov 5 04:54:41.818268 systemd[1]: cri-containerd-13ce18a588833f674a5e730a65bf545904aa3ebb93c4bf47f9abef5c9a162768.scope: Consumed 654ms CPU time, 182M memory peak, 2.8M read from disk, 171.3M written to disk. 
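[Editor's note, not part of the log] The recurring "Nameserver limits exceeded" entries reflect the resolver's limit of 3 `nameserver` entries: the kubelet keeps the first three and warns that the rest were omitted, which is why the applied line in the log is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A hedged illustration of that truncation (the helper name `applied_nameservers` is hypothetical):

```shell
#!/bin/sh
# Hedged sketch: reproduce the kubelet's nameserver truncation. The glibc
# resolver honors at most 3 "nameserver" lines in resolv.conf, so any
# further entries are dropped and a warning like the one above is logged.
MAXNS=3

applied_nameservers() {
  # Print only the nameservers that would actually be applied
  # (the first MAXNS "nameserver" entries in the given file).
  awk '$1 == "nameserver" { print $2 }' "$1" | head -n "$MAXNS"
}

# Example with a throwaway resolv.conf holding 4 entries:
tmp=$(mktemp)
printf 'nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n' > "$tmp"
applied_nameservers "$tmp"   # the 4th entry (9.9.9.9) is dropped
rm -f "$tmp"
```

The fix on a node is to trim resolv.conf (or the kubelet's `--resolv-conf` source) to at most three nameservers; until then the warning repeats on every DNS configuration pass, as seen throughout this log.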
Nov 5 04:54:41.866011 containerd[1646]: time="2025-11-05T04:54:41.865962118Z" level=info msg="received exit event container_id:\"13ce18a588833f674a5e730a65bf545904aa3ebb93c4bf47f9abef5c9a162768\" id:\"13ce18a588833f674a5e730a65bf545904aa3ebb93c4bf47f9abef5c9a162768\" pid:3642 exited_at:{seconds:1762318481 nanos:820845475}" Nov 5 04:54:41.896583 kubelet[2853]: I1105 04:54:41.896546 2853 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 04:54:41.904647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13ce18a588833f674a5e730a65bf545904aa3ebb93c4bf47f9abef5c9a162768-rootfs.mount: Deactivated successfully. Nov 5 04:54:43.556298 systemd[1]: Created slice kubepods-burstable-pod464b477f_76ea_4367_9f43_e0e7912c23ef.slice - libcontainer container kubepods-burstable-pod464b477f_76ea_4367_9f43_e0e7912c23ef.slice. Nov 5 04:54:43.563054 systemd[1]: Created slice kubepods-besteffort-pod2c81581c_25f7_472a_9631_e6c9dfccb268.slice - libcontainer container kubepods-besteffort-pod2c81581c_25f7_472a_9631_e6c9dfccb268.slice. 
Nov 5 04:54:43.565714 containerd[1646]: time="2025-11-05T04:54:43.565658505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chq6c,Uid:2c81581c-25f7-472a-9631-e6c9dfccb268,Namespace:calico-system,Attempt:0,}" Nov 5 04:54:43.647489 kubelet[2853]: I1105 04:54:43.647430 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjnsd\" (UniqueName: \"kubernetes.io/projected/464b477f-76ea-4367-9f43-e0e7912c23ef-kube-api-access-zjnsd\") pod \"coredns-674b8bbfcf-qh59c\" (UID: \"464b477f-76ea-4367-9f43-e0e7912c23ef\") " pod="kube-system/coredns-674b8bbfcf-qh59c" Nov 5 04:54:43.647489 kubelet[2853]: I1105 04:54:43.647490 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/464b477f-76ea-4367-9f43-e0e7912c23ef-config-volume\") pod \"coredns-674b8bbfcf-qh59c\" (UID: \"464b477f-76ea-4367-9f43-e0e7912c23ef\") " pod="kube-system/coredns-674b8bbfcf-qh59c" Nov 5 04:54:43.673894 systemd[1]: Created slice kubepods-besteffort-pod8530303d_8858_42cd_8a4b_17efdc4905d8.slice - libcontainer container kubepods-besteffort-pod8530303d_8858_42cd_8a4b_17efdc4905d8.slice. 
Nov 5 04:54:43.748767 kubelet[2853]: I1105 04:54:43.748699 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8530303d-8858-42cd-8a4b-17efdc4905d8-whisker-backend-key-pair\") pod \"whisker-79b458d856-dzv2d\" (UID: \"8530303d-8858-42cd-8a4b-17efdc4905d8\") " pod="calico-system/whisker-79b458d856-dzv2d" Nov 5 04:54:43.748767 kubelet[2853]: I1105 04:54:43.748751 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjn4p\" (UniqueName: \"kubernetes.io/projected/8530303d-8858-42cd-8a4b-17efdc4905d8-kube-api-access-xjn4p\") pod \"whisker-79b458d856-dzv2d\" (UID: \"8530303d-8858-42cd-8a4b-17efdc4905d8\") " pod="calico-system/whisker-79b458d856-dzv2d" Nov 5 04:54:43.748984 kubelet[2853]: I1105 04:54:43.748801 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8530303d-8858-42cd-8a4b-17efdc4905d8-whisker-ca-bundle\") pod \"whisker-79b458d856-dzv2d\" (UID: \"8530303d-8858-42cd-8a4b-17efdc4905d8\") " pod="calico-system/whisker-79b458d856-dzv2d" Nov 5 04:54:43.785171 systemd[1]: Created slice kubepods-besteffort-podbde487fb_2c18_4dab_a763_3054070918ea.slice - libcontainer container kubepods-besteffort-podbde487fb_2c18_4dab_a763_3054070918ea.slice. Nov 5 04:54:43.815043 systemd[1]: Created slice kubepods-besteffort-pod98e18d15_122f_4a59_81ce_7fb003c6fe97.slice - libcontainer container kubepods-besteffort-pod98e18d15_122f_4a59_81ce_7fb003c6fe97.slice. Nov 5 04:54:43.826679 systemd[1]: Created slice kubepods-besteffort-pod963b2307_a381_4c52_97eb_b8c873c4eef3.slice - libcontainer container kubepods-besteffort-pod963b2307_a381_4c52_97eb_b8c873c4eef3.slice. 
Nov 5 04:54:43.849344 kubelet[2853]: I1105 04:54:43.849277 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h4xs\" (UniqueName: \"kubernetes.io/projected/bde487fb-2c18-4dab-a763-3054070918ea-kube-api-access-8h4xs\") pod \"calico-apiserver-68789659cc-mdcdc\" (UID: \"bde487fb-2c18-4dab-a763-3054070918ea\") " pod="calico-apiserver/calico-apiserver-68789659cc-mdcdc" Nov 5 04:54:43.849344 kubelet[2853]: I1105 04:54:43.849325 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/963b2307-a381-4c52-97eb-b8c873c4eef3-calico-apiserver-certs\") pod \"calico-apiserver-68789659cc-swcqh\" (UID: \"963b2307-a381-4c52-97eb-b8c873c4eef3\") " pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" Nov 5 04:54:43.849344 kubelet[2853]: I1105 04:54:43.849340 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bde487fb-2c18-4dab-a763-3054070918ea-calico-apiserver-certs\") pod \"calico-apiserver-68789659cc-mdcdc\" (UID: \"bde487fb-2c18-4dab-a763-3054070918ea\") " pod="calico-apiserver/calico-apiserver-68789659cc-mdcdc" Nov 5 04:54:43.849344 kubelet[2853]: I1105 04:54:43.849356 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98e18d15-122f-4a59-81ce-7fb003c6fe97-tigera-ca-bundle\") pod \"calico-kube-controllers-54bb96ccb8-sppjg\" (UID: \"98e18d15-122f-4a59-81ce-7fb003c6fe97\") " pod="calico-system/calico-kube-controllers-54bb96ccb8-sppjg" Nov 5 04:54:43.849701 kubelet[2853]: I1105 04:54:43.849373 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6dxw\" (UniqueName: 
\"kubernetes.io/projected/98e18d15-122f-4a59-81ce-7fb003c6fe97-kube-api-access-b6dxw\") pod \"calico-kube-controllers-54bb96ccb8-sppjg\" (UID: \"98e18d15-122f-4a59-81ce-7fb003c6fe97\") " pod="calico-system/calico-kube-controllers-54bb96ccb8-sppjg" Nov 5 04:54:43.849701 kubelet[2853]: I1105 04:54:43.849390 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6v75\" (UniqueName: \"kubernetes.io/projected/963b2307-a381-4c52-97eb-b8c873c4eef3-kube-api-access-t6v75\") pod \"calico-apiserver-68789659cc-swcqh\" (UID: \"963b2307-a381-4c52-97eb-b8c873c4eef3\") " pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" Nov 5 04:54:43.864109 kubelet[2853]: E1105 04:54:43.864049 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:43.864908 containerd[1646]: time="2025-11-05T04:54:43.864743096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qh59c,Uid:464b477f-76ea-4367-9f43-e0e7912c23ef,Namespace:kube-system,Attempt:0,}" Nov 5 04:54:43.871161 systemd[1]: Created slice kubepods-besteffort-podbc7bc11f_b4fe_4b49_97ff_ff9c5cb4fe1c.slice - libcontainer container kubepods-besteffort-podbc7bc11f_b4fe_4b49_97ff_ff9c5cb4fe1c.slice. Nov 5 04:54:43.876319 systemd[1]: Created slice kubepods-burstable-pod01310699_6728_4107_ba1d_e6a505bd0d5a.slice - libcontainer container kubepods-burstable-pod01310699_6728_4107_ba1d_e6a505bd0d5a.slice. 
Nov 5 04:54:43.929169 containerd[1646]: time="2025-11-05T04:54:43.926080063Z" level=error msg="Failed to destroy network for sandbox \"3632e1ed93ae18630dfdc5f3a2536a37128bcb3bf3ec370941061189175c1d41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:43.942084 containerd[1646]: time="2025-11-05T04:54:43.942002406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chq6c,Uid:2c81581c-25f7-472a-9631-e6c9dfccb268,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3632e1ed93ae18630dfdc5f3a2536a37128bcb3bf3ec370941061189175c1d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:43.942440 kubelet[2853]: E1105 04:54:43.942254 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3632e1ed93ae18630dfdc5f3a2536a37128bcb3bf3ec370941061189175c1d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:43.942440 kubelet[2853]: E1105 04:54:43.942318 2853 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3632e1ed93ae18630dfdc5f3a2536a37128bcb3bf3ec370941061189175c1d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-chq6c" Nov 5 04:54:43.942440 kubelet[2853]: E1105 04:54:43.942344 2853 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3632e1ed93ae18630dfdc5f3a2536a37128bcb3bf3ec370941061189175c1d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-chq6c" Nov 5 04:54:43.942628 kubelet[2853]: E1105 04:54:43.942410 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-chq6c_calico-system(2c81581c-25f7-472a-9631-e6c9dfccb268)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-chq6c_calico-system(2c81581c-25f7-472a-9631-e6c9dfccb268)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3632e1ed93ae18630dfdc5f3a2536a37128bcb3bf3ec370941061189175c1d41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:54:43.950414 kubelet[2853]: I1105 04:54:43.950359 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01310699-6728-4107-ba1d-e6a505bd0d5a-config-volume\") pod \"coredns-674b8bbfcf-hxbsg\" (UID: \"01310699-6728-4107-ba1d-e6a505bd0d5a\") " pod="kube-system/coredns-674b8bbfcf-hxbsg" Nov 5 04:54:43.950414 kubelet[2853]: I1105 04:54:43.950408 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c-config\") pod \"goldmane-666569f655-vtqjl\" (UID: \"bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c\") " pod="calico-system/goldmane-666569f655-vtqjl" Nov 5 04:54:43.950414 
kubelet[2853]: I1105 04:54:43.950423 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c-goldmane-ca-bundle\") pod \"goldmane-666569f655-vtqjl\" (UID: \"bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c\") " pod="calico-system/goldmane-666569f655-vtqjl" Nov 5 04:54:43.951122 kubelet[2853]: I1105 04:54:43.950440 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnjf9\" (UniqueName: \"kubernetes.io/projected/bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c-kube-api-access-nnjf9\") pod \"goldmane-666569f655-vtqjl\" (UID: \"bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c\") " pod="calico-system/goldmane-666569f655-vtqjl" Nov 5 04:54:43.951122 kubelet[2853]: I1105 04:54:43.950465 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5fbc\" (UniqueName: \"kubernetes.io/projected/01310699-6728-4107-ba1d-e6a505bd0d5a-kube-api-access-d5fbc\") pod \"coredns-674b8bbfcf-hxbsg\" (UID: \"01310699-6728-4107-ba1d-e6a505bd0d5a\") " pod="kube-system/coredns-674b8bbfcf-hxbsg" Nov 5 04:54:43.951412 kubelet[2853]: I1105 04:54:43.951391 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c-goldmane-key-pair\") pod \"goldmane-666569f655-vtqjl\" (UID: \"bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c\") " pod="calico-system/goldmane-666569f655-vtqjl" Nov 5 04:54:43.977520 containerd[1646]: time="2025-11-05T04:54:43.977474065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b458d856-dzv2d,Uid:8530303d-8858-42cd-8a4b-17efdc4905d8,Namespace:calico-system,Attempt:0,}" Nov 5 04:54:43.993963 containerd[1646]: time="2025-11-05T04:54:43.993894004Z" level=error msg="Failed to destroy network for sandbox 
\"e96786955b44423edd18f087375fb7116e09c80b54931eb22fcaf79d26e171a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:43.996079 containerd[1646]: time="2025-11-05T04:54:43.996028653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qh59c,Uid:464b477f-76ea-4367-9f43-e0e7912c23ef,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96786955b44423edd18f087375fb7116e09c80b54931eb22fcaf79d26e171a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:43.998044 kubelet[2853]: E1105 04:54:43.997987 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96786955b44423edd18f087375fb7116e09c80b54931eb22fcaf79d26e171a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:43.998144 kubelet[2853]: E1105 04:54:43.998090 2853 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96786955b44423edd18f087375fb7116e09c80b54931eb22fcaf79d26e171a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qh59c" Nov 5 04:54:43.998144 kubelet[2853]: E1105 04:54:43.998120 2853 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e96786955b44423edd18f087375fb7116e09c80b54931eb22fcaf79d26e171a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qh59c" Nov 5 04:54:43.998241 kubelet[2853]: E1105 04:54:43.998193 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qh59c_kube-system(464b477f-76ea-4367-9f43-e0e7912c23ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qh59c_kube-system(464b477f-76ea-4367-9f43-e0e7912c23ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e96786955b44423edd18f087375fb7116e09c80b54931eb22fcaf79d26e171a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qh59c" podUID="464b477f-76ea-4367-9f43-e0e7912c23ef" Nov 5 04:54:44.030323 containerd[1646]: time="2025-11-05T04:54:44.030256571Z" level=error msg="Failed to destroy network for sandbox \"1cd9bf72a1660d2bf626044f3447c382b5c5e4934e40f9f5ed39235f381ecb3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.068292 containerd[1646]: time="2025-11-05T04:54:44.068155822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b458d856-dzv2d,Uid:8530303d-8858-42cd-8a4b-17efdc4905d8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cd9bf72a1660d2bf626044f3447c382b5c5e4934e40f9f5ed39235f381ecb3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.068542 kubelet[2853]: E1105 04:54:44.068506 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cd9bf72a1660d2bf626044f3447c382b5c5e4934e40f9f5ed39235f381ecb3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.068780 kubelet[2853]: E1105 04:54:44.068655 2853 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cd9bf72a1660d2bf626044f3447c382b5c5e4934e40f9f5ed39235f381ecb3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79b458d856-dzv2d" Nov 5 04:54:44.068780 kubelet[2853]: E1105 04:54:44.068682 2853 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cd9bf72a1660d2bf626044f3447c382b5c5e4934e40f9f5ed39235f381ecb3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79b458d856-dzv2d" Nov 5 04:54:44.068780 kubelet[2853]: E1105 04:54:44.068737 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79b458d856-dzv2d_calico-system(8530303d-8858-42cd-8a4b-17efdc4905d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79b458d856-dzv2d_calico-system(8530303d-8858-42cd-8a4b-17efdc4905d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1cd9bf72a1660d2bf626044f3447c382b5c5e4934e40f9f5ed39235f381ecb3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79b458d856-dzv2d" podUID="8530303d-8858-42cd-8a4b-17efdc4905d8" Nov 5 04:54:44.088047 containerd[1646]: time="2025-11-05T04:54:44.087984579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68789659cc-mdcdc,Uid:bde487fb-2c18-4dab-a763-3054070918ea,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:54:44.119887 containerd[1646]: time="2025-11-05T04:54:44.119826425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bb96ccb8-sppjg,Uid:98e18d15-122f-4a59-81ce-7fb003c6fe97,Namespace:calico-system,Attempt:0,}" Nov 5 04:54:44.130490 containerd[1646]: time="2025-11-05T04:54:44.130442571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68789659cc-swcqh,Uid:963b2307-a381-4c52-97eb-b8c873c4eef3,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:54:44.147376 containerd[1646]: time="2025-11-05T04:54:44.147317421Z" level=error msg="Failed to destroy network for sandbox \"669c0037afaac7df7f61a867e39c5289a77dc3afce28d3c112b777f761c45c75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.175015 containerd[1646]: time="2025-11-05T04:54:44.174963397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vtqjl,Uid:bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c,Namespace:calico-system,Attempt:0,}" Nov 5 04:54:44.183670 kubelet[2853]: E1105 04:54:44.183628 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:44.184345 
containerd[1646]: time="2025-11-05T04:54:44.184297697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hxbsg,Uid:01310699-6728-4107-ba1d-e6a505bd0d5a,Namespace:kube-system,Attempt:0,}" Nov 5 04:54:44.322783 kubelet[2853]: E1105 04:54:44.322425 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:44.323449 containerd[1646]: time="2025-11-05T04:54:44.323344804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 04:54:44.342548 containerd[1646]: time="2025-11-05T04:54:44.342168473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68789659cc-mdcdc,Uid:bde487fb-2c18-4dab-a763-3054070918ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"669c0037afaac7df7f61a867e39c5289a77dc3afce28d3c112b777f761c45c75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.342794 kubelet[2853]: E1105 04:54:44.342747 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"669c0037afaac7df7f61a867e39c5289a77dc3afce28d3c112b777f761c45c75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.342930 kubelet[2853]: E1105 04:54:44.342808 2853 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"669c0037afaac7df7f61a867e39c5289a77dc3afce28d3c112b777f761c45c75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68789659cc-mdcdc" Nov 5 04:54:44.342930 kubelet[2853]: E1105 04:54:44.342836 2853 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"669c0037afaac7df7f61a867e39c5289a77dc3afce28d3c112b777f761c45c75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68789659cc-mdcdc" Nov 5 04:54:44.343206 kubelet[2853]: E1105 04:54:44.342917 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68789659cc-mdcdc_calico-apiserver(bde487fb-2c18-4dab-a763-3054070918ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68789659cc-mdcdc_calico-apiserver(bde487fb-2c18-4dab-a763-3054070918ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"669c0037afaac7df7f61a867e39c5289a77dc3afce28d3c112b777f761c45c75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68789659cc-mdcdc" podUID="bde487fb-2c18-4dab-a763-3054070918ea" Nov 5 04:54:44.398968 containerd[1646]: time="2025-11-05T04:54:44.398880593Z" level=error msg="Failed to destroy network for sandbox \"b2624d03cdd68babd915da30a3a1fe690172f878e36bfe5245999bab1460bf3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.537894 containerd[1646]: time="2025-11-05T04:54:44.537767910Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-54bb96ccb8-sppjg,Uid:98e18d15-122f-4a59-81ce-7fb003c6fe97,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2624d03cdd68babd915da30a3a1fe690172f878e36bfe5245999bab1460bf3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.538190 kubelet[2853]: E1105 04:54:44.538102 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2624d03cdd68babd915da30a3a1fe690172f878e36bfe5245999bab1460bf3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.538190 kubelet[2853]: E1105 04:54:44.538179 2853 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2624d03cdd68babd915da30a3a1fe690172f878e36bfe5245999bab1460bf3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54bb96ccb8-sppjg" Nov 5 04:54:44.538301 kubelet[2853]: E1105 04:54:44.538212 2853 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2624d03cdd68babd915da30a3a1fe690172f878e36bfe5245999bab1460bf3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54bb96ccb8-sppjg" Nov 5 04:54:44.538301 kubelet[2853]: E1105 04:54:44.538272 
2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54bb96ccb8-sppjg_calico-system(98e18d15-122f-4a59-81ce-7fb003c6fe97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54bb96ccb8-sppjg_calico-system(98e18d15-122f-4a59-81ce-7fb003c6fe97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2624d03cdd68babd915da30a3a1fe690172f878e36bfe5245999bab1460bf3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54bb96ccb8-sppjg" podUID="98e18d15-122f-4a59-81ce-7fb003c6fe97" Nov 5 04:54:44.588158 containerd[1646]: time="2025-11-05T04:54:44.587935521Z" level=error msg="Failed to destroy network for sandbox \"ace73b89b4c788dea4976644d1c634d0c3ea46d8f6cca4b2329a4a7dae922a25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.592895 containerd[1646]: time="2025-11-05T04:54:44.592810255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68789659cc-swcqh,Uid:963b2307-a381-4c52-97eb-b8c873c4eef3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace73b89b4c788dea4976644d1c634d0c3ea46d8f6cca4b2329a4a7dae922a25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.593663 kubelet[2853]: E1105 04:54:44.593115 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ace73b89b4c788dea4976644d1c634d0c3ea46d8f6cca4b2329a4a7dae922a25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.593663 kubelet[2853]: E1105 04:54:44.593178 2853 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace73b89b4c788dea4976644d1c634d0c3ea46d8f6cca4b2329a4a7dae922a25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" Nov 5 04:54:44.593663 kubelet[2853]: E1105 04:54:44.593207 2853 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ace73b89b4c788dea4976644d1c634d0c3ea46d8f6cca4b2329a4a7dae922a25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" Nov 5 04:54:44.593775 kubelet[2853]: E1105 04:54:44.593264 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68789659cc-swcqh_calico-apiserver(963b2307-a381-4c52-97eb-b8c873c4eef3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68789659cc-swcqh_calico-apiserver(963b2307-a381-4c52-97eb-b8c873c4eef3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ace73b89b4c788dea4976644d1c634d0c3ea46d8f6cca4b2329a4a7dae922a25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" podUID="963b2307-a381-4c52-97eb-b8c873c4eef3" Nov 5 04:54:44.597877 containerd[1646]: time="2025-11-05T04:54:44.597800887Z" level=error msg="Failed to destroy network for sandbox \"8805a81c2848d461499ea524068d6980cef67e69ab11d514569a9e035cc3f413\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.597996 containerd[1646]: time="2025-11-05T04:54:44.597965817Z" level=error msg="Failed to destroy network for sandbox \"9ba6cf889004d2bfc46af740af35b912743a7f8318c6e4f27db7a211493db9c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.600076 containerd[1646]: time="2025-11-05T04:54:44.600025505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vtqjl,Uid:bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8805a81c2848d461499ea524068d6980cef67e69ab11d514569a9e035cc3f413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.600260 kubelet[2853]: E1105 04:54:44.600222 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8805a81c2848d461499ea524068d6980cef67e69ab11d514569a9e035cc3f413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.600309 kubelet[2853]: E1105 04:54:44.600277 2853 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8805a81c2848d461499ea524068d6980cef67e69ab11d514569a9e035cc3f413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vtqjl" Nov 5 04:54:44.600309 kubelet[2853]: E1105 04:54:44.600301 2853 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8805a81c2848d461499ea524068d6980cef67e69ab11d514569a9e035cc3f413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vtqjl" Nov 5 04:54:44.600394 kubelet[2853]: E1105 04:54:44.600354 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vtqjl_calico-system(bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vtqjl_calico-system(bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8805a81c2848d461499ea524068d6980cef67e69ab11d514569a9e035cc3f413\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vtqjl" podUID="bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c" Nov 5 04:54:44.601914 containerd[1646]: time="2025-11-05T04:54:44.601883804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hxbsg,Uid:01310699-6728-4107-ba1d-e6a505bd0d5a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"9ba6cf889004d2bfc46af740af35b912743a7f8318c6e4f27db7a211493db9c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.602119 kubelet[2853]: E1105 04:54:44.602083 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ba6cf889004d2bfc46af740af35b912743a7f8318c6e4f27db7a211493db9c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:44.602162 kubelet[2853]: E1105 04:54:44.602128 2853 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ba6cf889004d2bfc46af740af35b912743a7f8318c6e4f27db7a211493db9c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hxbsg" Nov 5 04:54:44.602162 kubelet[2853]: E1105 04:54:44.602152 2853 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ba6cf889004d2bfc46af740af35b912743a7f8318c6e4f27db7a211493db9c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hxbsg" Nov 5 04:54:44.602227 kubelet[2853]: E1105 04:54:44.602196 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hxbsg_kube-system(01310699-6728-4107-ba1d-e6a505bd0d5a)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"coredns-674b8bbfcf-hxbsg_kube-system(01310699-6728-4107-ba1d-e6a505bd0d5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ba6cf889004d2bfc46af740af35b912743a7f8318c6e4f27db7a211493db9c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hxbsg" podUID="01310699-6728-4107-ba1d-e6a505bd0d5a" Nov 5 04:54:44.753517 systemd[1]: run-netns-cni\x2db5938194\x2dad21\x2d0f17\x2d966c\x2ddb68fcb2c1b7.mount: Deactivated successfully. Nov 5 04:54:44.753621 systemd[1]: run-netns-cni\x2defe24065\x2dff66\x2d5fe7\x2d149f\x2d1e322f06e663.mount: Deactivated successfully. Nov 5 04:54:51.072246 kubelet[2853]: E1105 04:54:51.072194 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:51.335052 kubelet[2853]: E1105 04:54:51.334947 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:52.510635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482042018.mount: Deactivated successfully. 
Nov 5 04:54:54.903173 containerd[1646]: time="2025-11-05T04:54:54.903030281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b458d856-dzv2d,Uid:8530303d-8858-42cd-8a4b-17efdc4905d8,Namespace:calico-system,Attempt:0,}" Nov 5 04:54:55.160975 containerd[1646]: time="2025-11-05T04:54:55.160152603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:55.161451 containerd[1646]: time="2025-11-05T04:54:55.161421845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 5 04:54:55.163115 containerd[1646]: time="2025-11-05T04:54:55.163069306Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:55.166871 containerd[1646]: time="2025-11-05T04:54:55.166804685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:54:55.167901 containerd[1646]: time="2025-11-05T04:54:55.167848644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.844465748s" Nov 5 04:54:55.167901 containerd[1646]: time="2025-11-05T04:54:55.167895592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 04:54:55.183109 containerd[1646]: time="2025-11-05T04:54:55.182998432Z" level=error msg="Failed to destroy network for 
sandbox \"1cfbf7bed4414b71901d08a20c14dafecfd31181ba206b1e70b8c46c2ed3dead\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:55.186270 containerd[1646]: time="2025-11-05T04:54:55.186218394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b458d856-dzv2d,Uid:8530303d-8858-42cd-8a4b-17efdc4905d8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfbf7bed4414b71901d08a20c14dafecfd31181ba206b1e70b8c46c2ed3dead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:55.186264 systemd[1]: run-netns-cni\x2d930dc392\x2d9530\x2db069\x2deda6\x2d00fb0939579e.mount: Deactivated successfully. Nov 5 04:54:55.187050 kubelet[2853]: E1105 04:54:55.186917 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfbf7bed4414b71901d08a20c14dafecfd31181ba206b1e70b8c46c2ed3dead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:55.187050 kubelet[2853]: E1105 04:54:55.187007 2853 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfbf7bed4414b71901d08a20c14dafecfd31181ba206b1e70b8c46c2ed3dead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79b458d856-dzv2d" Nov 5 04:54:55.187533 kubelet[2853]: E1105 04:54:55.187145 2853 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfbf7bed4414b71901d08a20c14dafecfd31181ba206b1e70b8c46c2ed3dead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79b458d856-dzv2d" Nov 5 04:54:55.187680 kubelet[2853]: E1105 04:54:55.187618 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79b458d856-dzv2d_calico-system(8530303d-8858-42cd-8a4b-17efdc4905d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79b458d856-dzv2d_calico-system(8530303d-8858-42cd-8a4b-17efdc4905d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cfbf7bed4414b71901d08a20c14dafecfd31181ba206b1e70b8c46c2ed3dead\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79b458d856-dzv2d" podUID="8530303d-8858-42cd-8a4b-17efdc4905d8" Nov 5 04:54:55.196037 containerd[1646]: time="2025-11-05T04:54:55.195979188Z" level=info msg="CreateContainer within sandbox \"e18853584fb5e68aa39639614fab8b7b40b68ce08e88592d98d3fc6a82f00773\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 04:54:55.215676 containerd[1646]: time="2025-11-05T04:54:55.215628350Z" level=info msg="Container 2470ff7e1e916990908e89dce77f5ea9690124d2059b3a80536897148aaf7b6b: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:54:55.225212 containerd[1646]: time="2025-11-05T04:54:55.225180663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68789659cc-swcqh,Uid:963b2307-a381-4c52-97eb-b8c873c4eef3,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:54:55.227483 containerd[1646]: 
time="2025-11-05T04:54:55.227419405Z" level=info msg="CreateContainer within sandbox \"e18853584fb5e68aa39639614fab8b7b40b68ce08e88592d98d3fc6a82f00773\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2470ff7e1e916990908e89dce77f5ea9690124d2059b3a80536897148aaf7b6b\"" Nov 5 04:54:55.227777 containerd[1646]: time="2025-11-05T04:54:55.227751507Z" level=info msg="StartContainer for \"2470ff7e1e916990908e89dce77f5ea9690124d2059b3a80536897148aaf7b6b\"" Nov 5 04:54:55.229288 containerd[1646]: time="2025-11-05T04:54:55.229259697Z" level=info msg="connecting to shim 2470ff7e1e916990908e89dce77f5ea9690124d2059b3a80536897148aaf7b6b" address="unix:///run/containerd/s/e4e18d618d514c109297a3b76a9ba9746485a1636a3a1edd7a8de15ae937fc2d" protocol=ttrpc version=3 Nov 5 04:54:55.257178 systemd[1]: Started cri-containerd-2470ff7e1e916990908e89dce77f5ea9690124d2059b3a80536897148aaf7b6b.scope - libcontainer container 2470ff7e1e916990908e89dce77f5ea9690124d2059b3a80536897148aaf7b6b. 
Nov 5 04:54:55.300101 containerd[1646]: time="2025-11-05T04:54:55.300027679Z" level=error msg="Failed to destroy network for sandbox \"ec04239ba47c545cd8bfc25e8e55b10140a644e8e827e108c42a234da41ed0f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:55.450977 containerd[1646]: time="2025-11-05T04:54:55.450065433Z" level=info msg="StartContainer for \"2470ff7e1e916990908e89dce77f5ea9690124d2059b3a80536897148aaf7b6b\" returns successfully" Nov 5 04:54:55.464152 containerd[1646]: time="2025-11-05T04:54:55.464068469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68789659cc-swcqh,Uid:963b2307-a381-4c52-97eb-b8c873c4eef3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec04239ba47c545cd8bfc25e8e55b10140a644e8e827e108c42a234da41ed0f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:55.464520 kubelet[2853]: E1105 04:54:55.464450 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec04239ba47c545cd8bfc25e8e55b10140a644e8e827e108c42a234da41ed0f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:54:55.464608 kubelet[2853]: E1105 04:54:55.464554 2853 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec04239ba47c545cd8bfc25e8e55b10140a644e8e827e108c42a234da41ed0f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" Nov 5 04:54:55.464669 kubelet[2853]: E1105 04:54:55.464617 2853 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec04239ba47c545cd8bfc25e8e55b10140a644e8e827e108c42a234da41ed0f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" Nov 5 04:54:55.464757 kubelet[2853]: E1105 04:54:55.464707 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68789659cc-swcqh_calico-apiserver(963b2307-a381-4c52-97eb-b8c873c4eef3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68789659cc-swcqh_calico-apiserver(963b2307-a381-4c52-97eb-b8c873c4eef3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec04239ba47c545cd8bfc25e8e55b10140a644e8e827e108c42a234da41ed0f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" podUID="963b2307-a381-4c52-97eb-b8c873c4eef3" Nov 5 04:54:55.498984 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 04:54:55.500145 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 5 04:54:55.898123 kubelet[2853]: E1105 04:54:55.898073 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:56.193933 kubelet[2853]: I1105 04:54:56.193715 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lc7sq" podStartSLOduration=2.180032008 podStartE2EDuration="28.193695666s" podCreationTimestamp="2025-11-05 04:54:28 +0000 UTC" firstStartedPulling="2025-11-05 04:54:29.154990677 +0000 UTC m=+20.022155224" lastFinishedPulling="2025-11-05 04:54:55.168654335 +0000 UTC m=+46.035818882" observedRunningTime="2025-11-05 04:54:56.167750604 +0000 UTC m=+47.034915151" watchObservedRunningTime="2025-11-05 04:54:56.193695666 +0000 UTC m=+47.060860213" Nov 5 04:54:56.224447 kubelet[2853]: E1105 04:54:56.224392 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:56.225044 containerd[1646]: time="2025-11-05T04:54:56.224847257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qh59c,Uid:464b477f-76ea-4367-9f43-e0e7912c23ef,Namespace:kube-system,Attempt:0,}" Nov 5 04:54:56.225813 containerd[1646]: time="2025-11-05T04:54:56.225619266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68789659cc-mdcdc,Uid:bde487fb-2c18-4dab-a763-3054070918ea,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:54:56.226517 kubelet[2853]: E1105 04:54:56.226328 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:56.228468 containerd[1646]: time="2025-11-05T04:54:56.227900055Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-hxbsg,Uid:01310699-6728-4107-ba1d-e6a505bd0d5a,Namespace:kube-system,Attempt:0,}" Nov 5 04:54:56.378417 kubelet[2853]: I1105 04:54:56.378103 2853 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjn4p\" (UniqueName: \"kubernetes.io/projected/8530303d-8858-42cd-8a4b-17efdc4905d8-kube-api-access-xjn4p\") pod \"8530303d-8858-42cd-8a4b-17efdc4905d8\" (UID: \"8530303d-8858-42cd-8a4b-17efdc4905d8\") " Nov 5 04:54:56.378417 kubelet[2853]: I1105 04:54:56.378150 2853 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8530303d-8858-42cd-8a4b-17efdc4905d8-whisker-ca-bundle\") pod \"8530303d-8858-42cd-8a4b-17efdc4905d8\" (UID: \"8530303d-8858-42cd-8a4b-17efdc4905d8\") " Nov 5 04:54:56.378417 kubelet[2853]: I1105 04:54:56.378182 2853 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8530303d-8858-42cd-8a4b-17efdc4905d8-whisker-backend-key-pair\") pod \"8530303d-8858-42cd-8a4b-17efdc4905d8\" (UID: \"8530303d-8858-42cd-8a4b-17efdc4905d8\") " Nov 5 04:54:56.378797 kubelet[2853]: I1105 04:54:56.378726 2853 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8530303d-8858-42cd-8a4b-17efdc4905d8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8530303d-8858-42cd-8a4b-17efdc4905d8" (UID: "8530303d-8858-42cd-8a4b-17efdc4905d8"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 04:54:56.386763 kubelet[2853]: I1105 04:54:56.386626 2853 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8530303d-8858-42cd-8a4b-17efdc4905d8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8530303d-8858-42cd-8a4b-17efdc4905d8" (UID: "8530303d-8858-42cd-8a4b-17efdc4905d8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 04:54:56.388340 systemd[1]: var-lib-kubelet-pods-8530303d\x2d8858\x2d42cd\x2d8a4b\x2d17efdc4905d8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 04:54:56.390529 kubelet[2853]: I1105 04:54:56.390473 2853 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8530303d-8858-42cd-8a4b-17efdc4905d8-kube-api-access-xjn4p" (OuterVolumeSpecName: "kube-api-access-xjn4p") pod "8530303d-8858-42cd-8a4b-17efdc4905d8" (UID: "8530303d-8858-42cd-8a4b-17efdc4905d8"). InnerVolumeSpecName "kube-api-access-xjn4p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 04:54:56.479665 kubelet[2853]: I1105 04:54:56.479072 2853 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xjn4p\" (UniqueName: \"kubernetes.io/projected/8530303d-8858-42cd-8a4b-17efdc4905d8-kube-api-access-xjn4p\") on node \"localhost\" DevicePath \"\"" Nov 5 04:54:56.479665 kubelet[2853]: I1105 04:54:56.479129 2853 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8530303d-8858-42cd-8a4b-17efdc4905d8-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 04:54:56.479665 kubelet[2853]: I1105 04:54:56.479150 2853 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8530303d-8858-42cd-8a4b-17efdc4905d8-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 04:54:56.544308 systemd-networkd[1531]: calib9c6f9b96ce: Link UP Nov 5 04:54:56.545219 systemd-networkd[1531]: calib9c6f9b96ce: Gained carrier Nov 5 04:54:56.557936 containerd[1646]: 2025-11-05 04:54:56.278 [INFO][4099] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 04:54:56.557936 containerd[1646]: 2025-11-05 04:54:56.336 [INFO][4099] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0 coredns-674b8bbfcf- kube-system 01310699-6728-4107-ba1d-e6a505bd0d5a 850 0 2025-11-05 04:54:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-hxbsg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib9c6f9b96ce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-hxbsg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hxbsg-" Nov 5 04:54:56.557936 containerd[1646]: 2025-11-05 04:54:56.336 [INFO][4099] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" Namespace="kube-system" Pod="coredns-674b8bbfcf-hxbsg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0" Nov 5 04:54:56.557936 containerd[1646]: 2025-11-05 04:54:56.482 [INFO][4138] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" HandleID="k8s-pod-network.981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" Workload="localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0" Nov 5 04:54:56.558231 containerd[1646]: 2025-11-05 04:54:56.483 [INFO][4138] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" HandleID="k8s-pod-network.981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" Workload="localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041e190), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-hxbsg", "timestamp":"2025-11-05 04:54:56.482730143 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:54:56.558231 containerd[1646]: 2025-11-05 04:54:56.483 [INFO][4138] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:54:56.558231 containerd[1646]: 2025-11-05 04:54:56.484 [INFO][4138] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:54:56.558231 containerd[1646]: 2025-11-05 04:54:56.484 [INFO][4138] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:54:56.558231 containerd[1646]: 2025-11-05 04:54:56.496 [INFO][4138] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" host="localhost" Nov 5 04:54:56.558231 containerd[1646]: 2025-11-05 04:54:56.506 [INFO][4138] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:54:56.558231 containerd[1646]: 2025-11-05 04:54:56.511 [INFO][4138] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:54:56.558231 containerd[1646]: 2025-11-05 04:54:56.513 [INFO][4138] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:54:56.558231 containerd[1646]: 2025-11-05 04:54:56.515 [INFO][4138] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:54:56.558231 containerd[1646]: 2025-11-05 04:54:56.515 [INFO][4138] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" host="localhost" Nov 5 04:54:56.558585 containerd[1646]: 2025-11-05 04:54:56.518 [INFO][4138] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a Nov 5 04:54:56.558585 containerd[1646]: 2025-11-05 04:54:56.522 [INFO][4138] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" host="localhost" Nov 5 04:54:56.558585 containerd[1646]: 2025-11-05 04:54:56.528 [INFO][4138] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" host="localhost" Nov 5 04:54:56.558585 containerd[1646]: 2025-11-05 04:54:56.529 [INFO][4138] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" host="localhost" Nov 5 04:54:56.558585 containerd[1646]: 2025-11-05 04:54:56.529 [INFO][4138] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:54:56.558585 containerd[1646]: 2025-11-05 04:54:56.529 [INFO][4138] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" HandleID="k8s-pod-network.981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" Workload="localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0" Nov 5 04:54:56.558734 containerd[1646]: 2025-11-05 04:54:56.535 [INFO][4099] cni-plugin/k8s.go 418: Populated endpoint ContainerID="981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" Namespace="kube-system" Pod="coredns-674b8bbfcf-hxbsg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"01310699-6728-4107-ba1d-e6a505bd0d5a", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-hxbsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9c6f9b96ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:54:56.558886 containerd[1646]: 2025-11-05 04:54:56.535 [INFO][4099] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" Namespace="kube-system" Pod="coredns-674b8bbfcf-hxbsg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0" Nov 5 04:54:56.558886 containerd[1646]: 2025-11-05 04:54:56.535 [INFO][4099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9c6f9b96ce ContainerID="981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" Namespace="kube-system" Pod="coredns-674b8bbfcf-hxbsg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0" Nov 5 04:54:56.558886 containerd[1646]: 2025-11-05 04:54:56.545 [INFO][4099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" Namespace="kube-system" Pod="coredns-674b8bbfcf-hxbsg" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0" Nov 5 04:54:56.558969 containerd[1646]: 2025-11-05 04:54:56.546 [INFO][4099] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" Namespace="kube-system" Pod="coredns-674b8bbfcf-hxbsg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"01310699-6728-4107-ba1d-e6a505bd0d5a", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a", Pod:"coredns-674b8bbfcf-hxbsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9c6f9b96ce", MAC:"76:29:58:02:fd:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:54:56.558969 containerd[1646]: 2025-11-05 04:54:56.554 [INFO][4099] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" Namespace="kube-system" Pod="coredns-674b8bbfcf-hxbsg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hxbsg-eth0" Nov 5 04:54:56.639503 systemd-networkd[1531]: cali0982df2afeb: Link UP Nov 5 04:54:56.640078 systemd-networkd[1531]: cali0982df2afeb: Gained carrier Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.272 [INFO][4108] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.334 [INFO][4108] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qh59c-eth0 coredns-674b8bbfcf- kube-system 464b477f-76ea-4367-9f43-e0e7912c23ef 842 0 2025-11-05 04:54:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qh59c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0982df2afeb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" Namespace="kube-system" Pod="coredns-674b8bbfcf-qh59c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qh59c-" Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.334 [INFO][4108] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" Namespace="kube-system" Pod="coredns-674b8bbfcf-qh59c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qh59c-eth0" Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.482 [INFO][4140] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" HandleID="k8s-pod-network.69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" Workload="localhost-k8s-coredns--674b8bbfcf--qh59c-eth0" Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.483 [INFO][4140] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" HandleID="k8s-pod-network.69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" Workload="localhost-k8s-coredns--674b8bbfcf--qh59c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ab440), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qh59c", "timestamp":"2025-11-05 04:54:56.482987627 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.484 [INFO][4140] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.529 [INFO][4140] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.529 [INFO][4140] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.596 [INFO][4140] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" host="localhost" Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.606 [INFO][4140] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.611 [INFO][4140] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.613 [INFO][4140] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.615 [INFO][4140] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.615 [INFO][4140] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" host="localhost" Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.616 [INFO][4140] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02 Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.620 [INFO][4140] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" host="localhost" Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.627 [INFO][4140] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" host="localhost" Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.627 [INFO][4140] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" host="localhost" Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.628 [INFO][4140] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:54:56.652948 containerd[1646]: 2025-11-05 04:54:56.628 [INFO][4140] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" HandleID="k8s-pod-network.69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" Workload="localhost-k8s-coredns--674b8bbfcf--qh59c-eth0" Nov 5 04:54:56.653619 containerd[1646]: 2025-11-05 04:54:56.636 [INFO][4108] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" Namespace="kube-system" Pod="coredns-674b8bbfcf-qh59c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qh59c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qh59c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"464b477f-76ea-4367-9f43-e0e7912c23ef", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qh59c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0982df2afeb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:54:56.653619 containerd[1646]: 2025-11-05 04:54:56.636 [INFO][4108] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" Namespace="kube-system" Pod="coredns-674b8bbfcf-qh59c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qh59c-eth0" Nov 5 04:54:56.653619 containerd[1646]: 2025-11-05 04:54:56.636 [INFO][4108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0982df2afeb ContainerID="69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" Namespace="kube-system" Pod="coredns-674b8bbfcf-qh59c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qh59c-eth0" Nov 5 04:54:56.653619 containerd[1646]: 2025-11-05 04:54:56.639 [INFO][4108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" Namespace="kube-system" Pod="coredns-674b8bbfcf-qh59c" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qh59c-eth0" Nov 5 04:54:56.653619 containerd[1646]: 2025-11-05 04:54:56.640 [INFO][4108] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" Namespace="kube-system" Pod="coredns-674b8bbfcf-qh59c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qh59c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qh59c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"464b477f-76ea-4367-9f43-e0e7912c23ef", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02", Pod:"coredns-674b8bbfcf-qh59c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0982df2afeb", MAC:"a6:a3:31:f9:ac:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:54:56.653619 containerd[1646]: 2025-11-05 04:54:56.649 [INFO][4108] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" Namespace="kube-system" Pod="coredns-674b8bbfcf-qh59c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qh59c-eth0" Nov 5 04:54:56.691046 containerd[1646]: time="2025-11-05T04:54:56.690931850Z" level=info msg="connecting to shim 981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a" address="unix:///run/containerd/s/fc3f7cf653d76f8ebef69b96a1afa9e3d86811b820d6f971bea346ce1b829567" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:54:56.691191 containerd[1646]: time="2025-11-05T04:54:56.690982576Z" level=info msg="connecting to shim 69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02" address="unix:///run/containerd/s/aad594a50b5665b19fa245561bc7e5ae1df5d038ab307a59cdc827249ff7c4cd" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:54:56.738407 systemd-networkd[1531]: caliea5a06963eb: Link UP Nov 5 04:54:56.742140 systemd-networkd[1531]: caliea5a06963eb: Gained carrier Nov 5 04:54:56.763421 systemd[1]: Started cri-containerd-69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02.scope - libcontainer container 69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02. 
Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.419 [INFO][4130] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.434 [INFO][4130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0 calico-apiserver-68789659cc- calico-apiserver bde487fb-2c18-4dab-a763-3054070918ea 847 0 2025-11-05 04:54:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68789659cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68789659cc-mdcdc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliea5a06963eb [] [] }} ContainerID="b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-mdcdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--mdcdc-" Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.434 [INFO][4130] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-mdcdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0" Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.496 [INFO][4161] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" HandleID="k8s-pod-network.b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" Workload="localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0" Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.496 [INFO][4161] ipam/ipam_plugin.go 275: Auto assigning 
IP ContainerID="b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" HandleID="k8s-pod-network.b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" Workload="localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000528b80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68789659cc-mdcdc", "timestamp":"2025-11-05 04:54:56.496211749 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.496 [INFO][4161] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.627 [INFO][4161] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.627 [INFO][4161] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.697 [INFO][4161] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" host="localhost" Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.708 [INFO][4161] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.713 [INFO][4161] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.716 [INFO][4161] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.718 [INFO][4161] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.718 [INFO][4161] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" host="localhost" Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.719 [INFO][4161] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1 Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.723 [INFO][4161] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" host="localhost" Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.730 [INFO][4161] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" host="localhost" Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.730 [INFO][4161] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" host="localhost" Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.730 [INFO][4161] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 04:54:56.769525 containerd[1646]: 2025-11-05 04:54:56.730 [INFO][4161] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" HandleID="k8s-pod-network.b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" Workload="localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0" Nov 5 04:54:56.770493 containerd[1646]: 2025-11-05 04:54:56.734 [INFO][4130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-mdcdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0", GenerateName:"calico-apiserver-68789659cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"bde487fb-2c18-4dab-a763-3054070918ea", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68789659cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68789659cc-mdcdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea5a06963eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:54:56.770493 containerd[1646]: 2025-11-05 04:54:56.734 [INFO][4130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-mdcdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0" Nov 5 04:54:56.770493 containerd[1646]: 2025-11-05 04:54:56.734 [INFO][4130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea5a06963eb ContainerID="b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-mdcdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0" Nov 5 04:54:56.770493 containerd[1646]: 2025-11-05 04:54:56.750 [INFO][4130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-mdcdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0" Nov 5 04:54:56.770493 containerd[1646]: 2025-11-05 04:54:56.755 [INFO][4130] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-mdcdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0", 
GenerateName:"calico-apiserver-68789659cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"bde487fb-2c18-4dab-a763-3054070918ea", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68789659cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1", Pod:"calico-apiserver-68789659cc-mdcdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea5a06963eb", MAC:"4a:5d:99:1c:9f:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:54:56.770493 containerd[1646]: 2025-11-05 04:54:56.765 [INFO][4130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-mdcdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--mdcdc-eth0" Nov 5 04:54:56.779163 systemd[1]: Started cri-containerd-981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a.scope - libcontainer container 981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a. 
Nov 5 04:54:56.791793 containerd[1646]: time="2025-11-05T04:54:56.791737005Z" level=info msg="connecting to shim b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1" address="unix:///run/containerd/s/0febfb014464c7989523d14289c417f510e53cfea70978acc317847fe79acb45" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:54:56.793398 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:54:56.796774 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:54:56.819091 systemd[1]: Started cri-containerd-b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1.scope - libcontainer container b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1. Nov 5 04:54:56.841080 containerd[1646]: time="2025-11-05T04:54:56.840836993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qh59c,Uid:464b477f-76ea-4367-9f43-e0e7912c23ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02\"" Nov 5 04:54:56.841597 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:54:56.843262 kubelet[2853]: E1105 04:54:56.843026 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:56.844876 containerd[1646]: time="2025-11-05T04:54:56.844775322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hxbsg,Uid:01310699-6728-4107-ba1d-e6a505bd0d5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a\"" Nov 5 04:54:56.847182 kubelet[2853]: E1105 04:54:56.847081 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:56.852337 containerd[1646]: time="2025-11-05T04:54:56.852291686Z" level=info msg="CreateContainer within sandbox \"69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 04:54:56.853752 containerd[1646]: time="2025-11-05T04:54:56.853688256Z" level=info msg="CreateContainer within sandbox \"981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 04:54:56.868113 containerd[1646]: time="2025-11-05T04:54:56.868061015Z" level=info msg="Container 271349dc4136ea82642cb4e0fd222ab8ead10cd0fee56f415b5fb90c64079b1c: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:54:56.871482 containerd[1646]: time="2025-11-05T04:54:56.870903528Z" level=info msg="Container 256f241e2d70f0069f795725f28b8e3c81af62dff36299fe58c3df613630198c: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:54:56.875238 containerd[1646]: time="2025-11-05T04:54:56.875094181Z" level=info msg="CreateContainer within sandbox \"69570f745bb6f00d1ac6100c8c91e05c6afddd772cee46457d5c1b123da2cf02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"271349dc4136ea82642cb4e0fd222ab8ead10cd0fee56f415b5fb90c64079b1c\"" Nov 5 04:54:56.876812 containerd[1646]: time="2025-11-05T04:54:56.876769985Z" level=info msg="StartContainer for \"271349dc4136ea82642cb4e0fd222ab8ead10cd0fee56f415b5fb90c64079b1c\"" Nov 5 04:54:56.879817 containerd[1646]: time="2025-11-05T04:54:56.879396965Z" level=info msg="CreateContainer within sandbox \"981527757a8d95007d7c55f432cfe1898c37bf3178666dd732b669062eb37f9a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"256f241e2d70f0069f795725f28b8e3c81af62dff36299fe58c3df613630198c\"" Nov 5 04:54:56.881932 containerd[1646]: time="2025-11-05T04:54:56.881899751Z" level=info msg="StartContainer for 
\"256f241e2d70f0069f795725f28b8e3c81af62dff36299fe58c3df613630198c\"" Nov 5 04:54:56.882064 containerd[1646]: time="2025-11-05T04:54:56.882036597Z" level=info msg="connecting to shim 271349dc4136ea82642cb4e0fd222ab8ead10cd0fee56f415b5fb90c64079b1c" address="unix:///run/containerd/s/aad594a50b5665b19fa245561bc7e5ae1df5d038ab307a59cdc827249ff7c4cd" protocol=ttrpc version=3 Nov 5 04:54:56.886333 containerd[1646]: time="2025-11-05T04:54:56.886190321Z" level=info msg="connecting to shim 256f241e2d70f0069f795725f28b8e3c81af62dff36299fe58c3df613630198c" address="unix:///run/containerd/s/fc3f7cf653d76f8ebef69b96a1afa9e3d86811b820d6f971bea346ce1b829567" protocol=ttrpc version=3 Nov 5 04:54:56.888817 containerd[1646]: time="2025-11-05T04:54:56.888035643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68789659cc-mdcdc,Uid:bde487fb-2c18-4dab-a763-3054070918ea,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b53e2ba8b34fbda35d56340e2a2108d4f05573c97accef59ea7023206968b8e1\"" Nov 5 04:54:56.890459 containerd[1646]: time="2025-11-05T04:54:56.890318498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:54:56.913516 kubelet[2853]: E1105 04:54:56.913460 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:56.914920 systemd[1]: Started cri-containerd-256f241e2d70f0069f795725f28b8e3c81af62dff36299fe58c3df613630198c.scope - libcontainer container 256f241e2d70f0069f795725f28b8e3c81af62dff36299fe58c3df613630198c. Nov 5 04:54:56.924010 systemd[1]: Started cri-containerd-271349dc4136ea82642cb4e0fd222ab8ead10cd0fee56f415b5fb90c64079b1c.scope - libcontainer container 271349dc4136ea82642cb4e0fd222ab8ead10cd0fee56f415b5fb90c64079b1c. 
Nov 5 04:54:56.926672 systemd[1]: Removed slice kubepods-besteffort-pod8530303d_8858_42cd_8a4b_17efdc4905d8.slice - libcontainer container kubepods-besteffort-pod8530303d_8858_42cd_8a4b_17efdc4905d8.slice. Nov 5 04:54:56.968426 containerd[1646]: time="2025-11-05T04:54:56.968384916Z" level=info msg="StartContainer for \"256f241e2d70f0069f795725f28b8e3c81af62dff36299fe58c3df613630198c\" returns successfully" Nov 5 04:54:57.001177 systemd[1]: Created slice kubepods-besteffort-pod26f1612f_9150_497a_bbb2_ec9dde0e0a53.slice - libcontainer container kubepods-besteffort-pod26f1612f_9150_497a_bbb2_ec9dde0e0a53.slice. Nov 5 04:54:57.022401 containerd[1646]: time="2025-11-05T04:54:57.022147321Z" level=info msg="StartContainer for \"271349dc4136ea82642cb4e0fd222ab8ead10cd0fee56f415b5fb90c64079b1c\" returns successfully" Nov 5 04:54:57.084946 kubelet[2853]: I1105 04:54:57.083670 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mnn8\" (UniqueName: \"kubernetes.io/projected/26f1612f-9150-497a-bbb2-ec9dde0e0a53-kube-api-access-7mnn8\") pod \"whisker-56c77d6c74-6bvgx\" (UID: \"26f1612f-9150-497a-bbb2-ec9dde0e0a53\") " pod="calico-system/whisker-56c77d6c74-6bvgx" Nov 5 04:54:57.084946 kubelet[2853]: I1105 04:54:57.083724 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26f1612f-9150-497a-bbb2-ec9dde0e0a53-whisker-ca-bundle\") pod \"whisker-56c77d6c74-6bvgx\" (UID: \"26f1612f-9150-497a-bbb2-ec9dde0e0a53\") " pod="calico-system/whisker-56c77d6c74-6bvgx" Nov 5 04:54:57.084946 kubelet[2853]: I1105 04:54:57.083756 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/26f1612f-9150-497a-bbb2-ec9dde0e0a53-whisker-backend-key-pair\") pod \"whisker-56c77d6c74-6bvgx\" (UID: 
\"26f1612f-9150-497a-bbb2-ec9dde0e0a53\") " pod="calico-system/whisker-56c77d6c74-6bvgx" Nov 5 04:54:57.095347 systemd[1]: var-lib-kubelet-pods-8530303d\x2d8858\x2d42cd\x2d8a4b\x2d17efdc4905d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxjn4p.mount: Deactivated successfully. Nov 5 04:54:57.223984 containerd[1646]: time="2025-11-05T04:54:57.223935107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vtqjl,Uid:bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c,Namespace:calico-system,Attempt:0,}" Nov 5 04:54:57.224382 containerd[1646]: time="2025-11-05T04:54:57.224294150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chq6c,Uid:2c81581c-25f7-472a-9631-e6c9dfccb268,Namespace:calico-system,Attempt:0,}" Nov 5 04:54:57.225572 containerd[1646]: time="2025-11-05T04:54:57.225544537Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:54:57.225993 kubelet[2853]: I1105 04:54:57.225961 2853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8530303d-8858-42cd-8a4b-17efdc4905d8" path="/var/lib/kubelet/pods/8530303d-8858-42cd-8a4b-17efdc4905d8/volumes" Nov 5 04:54:57.306325 containerd[1646]: time="2025-11-05T04:54:57.306201239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56c77d6c74-6bvgx,Uid:26f1612f-9150-497a-bbb2-ec9dde0e0a53,Namespace:calico-system,Attempt:0,}" Nov 5 04:54:57.783123 containerd[1646]: time="2025-11-05T04:54:57.783052343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:54:57.783377 containerd[1646]: time="2025-11-05T04:54:57.783134638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:54:57.949188 kubelet[2853]: E1105 
04:54:57.783623 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:54:57.949188 kubelet[2853]: E1105 04:54:57.783709 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:54:57.790138 systemd-networkd[1531]: caliea5a06963eb: Gained IPv6LL Nov 5 04:54:57.949680 kubelet[2853]: E1105 04:54:57.784607 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h4xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68789659cc-mdcdc_calico-apiserver(bde487fb-2c18-4dab-a763-3054070918ea): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:54:57.949680 kubelet[2853]: E1105 04:54:57.785984 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-mdcdc" podUID="bde487fb-2c18-4dab-a763-3054070918ea" Nov 5 04:54:57.949680 kubelet[2853]: E1105 04:54:57.916655 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:57.949680 kubelet[2853]: E1105 04:54:57.919027 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:57.949680 kubelet[2853]: E1105 04:54:57.919211 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-mdcdc" podUID="bde487fb-2c18-4dab-a763-3054070918ea" Nov 5 04:54:58.056972 systemd-networkd[1531]: vxlan.calico: Link UP Nov 5 04:54:58.057001 systemd-networkd[1531]: vxlan.calico: Gained carrier Nov 5 04:54:58.110766 systemd-networkd[1531]: calib9c6f9b96ce: Gained IPv6LL Nov 5 04:54:58.121593 kubelet[2853]: I1105 
04:54:58.120661 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qh59c" podStartSLOduration=42.120640867 podStartE2EDuration="42.120640867s" podCreationTimestamp="2025-11-05 04:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:54:57.995536799 +0000 UTC m=+48.862701346" watchObservedRunningTime="2025-11-05 04:54:58.120640867 +0000 UTC m=+48.987805414" Nov 5 04:54:58.161814 kubelet[2853]: I1105 04:54:58.161702 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hxbsg" podStartSLOduration=42.161679633 podStartE2EDuration="42.161679633s" podCreationTimestamp="2025-11-05 04:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:54:58.145790481 +0000 UTC m=+49.012955029" watchObservedRunningTime="2025-11-05 04:54:58.161679633 +0000 UTC m=+49.028844180" Nov 5 04:54:58.324555 systemd-networkd[1531]: cali033b53c8154: Link UP Nov 5 04:54:58.325397 systemd-networkd[1531]: cali033b53c8154: Gained carrier Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.222 [INFO][4626] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--56c77d6c74--6bvgx-eth0 whisker-56c77d6c74- calico-system 26f1612f-9150-497a-bbb2-ec9dde0e0a53 957 0 2025-11-05 04:54:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56c77d6c74 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-56c77d6c74-6bvgx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali033b53c8154 [] [] }} ContainerID="b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" 
Namespace="calico-system" Pod="whisker-56c77d6c74-6bvgx" WorkloadEndpoint="localhost-k8s-whisker--56c77d6c74--6bvgx-" Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.223 [INFO][4626] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" Namespace="calico-system" Pod="whisker-56c77d6c74-6bvgx" WorkloadEndpoint="localhost-k8s-whisker--56c77d6c74--6bvgx-eth0" Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.278 [INFO][4664] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" HandleID="k8s-pod-network.b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" Workload="localhost-k8s-whisker--56c77d6c74--6bvgx-eth0" Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.279 [INFO][4664] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" HandleID="k8s-pod-network.b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" Workload="localhost-k8s-whisker--56c77d6c74--6bvgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026d7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-56c77d6c74-6bvgx", "timestamp":"2025-11-05 04:54:58.278600277 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.279 [INFO][4664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.279 [INFO][4664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.279 [INFO][4664] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.290 [INFO][4664] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" host="localhost" Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.296 [INFO][4664] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.301 [INFO][4664] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.302 [INFO][4664] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.305 [INFO][4664] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.305 [INFO][4664] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" host="localhost" Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.306 [INFO][4664] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672 Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.310 [INFO][4664] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" host="localhost" Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.315 [INFO][4664] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" host="localhost" Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.315 [INFO][4664] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" host="localhost" Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.315 [INFO][4664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:54:58.344150 containerd[1646]: 2025-11-05 04:54:58.315 [INFO][4664] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" HandleID="k8s-pod-network.b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" Workload="localhost-k8s-whisker--56c77d6c74--6bvgx-eth0" Nov 5 04:54:58.345241 containerd[1646]: 2025-11-05 04:54:58.320 [INFO][4626] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" Namespace="calico-system" Pod="whisker-56c77d6c74-6bvgx" WorkloadEndpoint="localhost-k8s-whisker--56c77d6c74--6bvgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--56c77d6c74--6bvgx-eth0", GenerateName:"whisker-56c77d6c74-", Namespace:"calico-system", SelfLink:"", UID:"26f1612f-9150-497a-bbb2-ec9dde0e0a53", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56c77d6c74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-56c77d6c74-6bvgx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali033b53c8154", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:54:58.345241 containerd[1646]: 2025-11-05 04:54:58.321 [INFO][4626] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" Namespace="calico-system" Pod="whisker-56c77d6c74-6bvgx" WorkloadEndpoint="localhost-k8s-whisker--56c77d6c74--6bvgx-eth0" Nov 5 04:54:58.345241 containerd[1646]: 2025-11-05 04:54:58.321 [INFO][4626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali033b53c8154 ContainerID="b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" Namespace="calico-system" Pod="whisker-56c77d6c74-6bvgx" WorkloadEndpoint="localhost-k8s-whisker--56c77d6c74--6bvgx-eth0" Nov 5 04:54:58.345241 containerd[1646]: 2025-11-05 04:54:58.326 [INFO][4626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" Namespace="calico-system" Pod="whisker-56c77d6c74-6bvgx" WorkloadEndpoint="localhost-k8s-whisker--56c77d6c74--6bvgx-eth0" Nov 5 04:54:58.345241 containerd[1646]: 2025-11-05 04:54:58.333 [INFO][4626] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" Namespace="calico-system" Pod="whisker-56c77d6c74-6bvgx" 
WorkloadEndpoint="localhost-k8s-whisker--56c77d6c74--6bvgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--56c77d6c74--6bvgx-eth0", GenerateName:"whisker-56c77d6c74-", Namespace:"calico-system", SelfLink:"", UID:"26f1612f-9150-497a-bbb2-ec9dde0e0a53", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56c77d6c74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672", Pod:"whisker-56c77d6c74-6bvgx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali033b53c8154", MAC:"f2:87:a3:a6:70:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:54:58.345241 containerd[1646]: 2025-11-05 04:54:58.340 [INFO][4626] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" Namespace="calico-system" Pod="whisker-56c77d6c74-6bvgx" WorkloadEndpoint="localhost-k8s-whisker--56c77d6c74--6bvgx-eth0" Nov 5 04:54:58.430019 systemd-networkd[1531]: cali0982df2afeb: Gained IPv6LL Nov 5 04:54:58.501176 systemd-networkd[1531]: 
calid97746cb5d7: Link UP Nov 5 04:54:58.503114 systemd-networkd[1531]: calid97746cb5d7: Gained carrier Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.222 [INFO][4624] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--chq6c-eth0 csi-node-driver- calico-system 2c81581c-25f7-472a-9631-e6c9dfccb268 719 0 2025-11-05 04:54:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-chq6c eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid97746cb5d7 [] [] }} ContainerID="6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" Namespace="calico-system" Pod="csi-node-driver-chq6c" WorkloadEndpoint="localhost-k8s-csi--node--driver--chq6c-" Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.222 [INFO][4624] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" Namespace="calico-system" Pod="csi-node-driver-chq6c" WorkloadEndpoint="localhost-k8s-csi--node--driver--chq6c-eth0" Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.291 [INFO][4662] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" HandleID="k8s-pod-network.6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" Workload="localhost-k8s-csi--node--driver--chq6c-eth0" Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.291 [INFO][4662] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" 
HandleID="k8s-pod-network.6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" Workload="localhost-k8s-csi--node--driver--chq6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-chq6c", "timestamp":"2025-11-05 04:54:58.291118273 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.291 [INFO][4662] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.316 [INFO][4662] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.316 [INFO][4662] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.389 [INFO][4662] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" host="localhost" Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.397 [INFO][4662] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.402 [INFO][4662] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.404 [INFO][4662] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.406 [INFO][4662] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.406 [INFO][4662] 
ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" host="localhost" Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.408 [INFO][4662] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366 Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.466 [INFO][4662] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" host="localhost" Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.489 [INFO][4662] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" host="localhost" Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.489 [INFO][4662] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" host="localhost" Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.489 [INFO][4662] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 04:54:58.532165 containerd[1646]: 2025-11-05 04:54:58.489 [INFO][4662] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" HandleID="k8s-pod-network.6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" Workload="localhost-k8s-csi--node--driver--chq6c-eth0" Nov 5 04:54:58.535309 containerd[1646]: 2025-11-05 04:54:58.493 [INFO][4624] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" Namespace="calico-system" Pod="csi-node-driver-chq6c" WorkloadEndpoint="localhost-k8s-csi--node--driver--chq6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--chq6c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c81581c-25f7-472a-9631-e6c9dfccb268", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-chq6c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calid97746cb5d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:54:58.535309 containerd[1646]: 2025-11-05 04:54:58.493 [INFO][4624] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" Namespace="calico-system" Pod="csi-node-driver-chq6c" WorkloadEndpoint="localhost-k8s-csi--node--driver--chq6c-eth0" Nov 5 04:54:58.535309 containerd[1646]: 2025-11-05 04:54:58.493 [INFO][4624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid97746cb5d7 ContainerID="6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" Namespace="calico-system" Pod="csi-node-driver-chq6c" WorkloadEndpoint="localhost-k8s-csi--node--driver--chq6c-eth0" Nov 5 04:54:58.535309 containerd[1646]: 2025-11-05 04:54:58.505 [INFO][4624] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" Namespace="calico-system" Pod="csi-node-driver-chq6c" WorkloadEndpoint="localhost-k8s-csi--node--driver--chq6c-eth0" Nov 5 04:54:58.535309 containerd[1646]: 2025-11-05 04:54:58.506 [INFO][4624] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" Namespace="calico-system" Pod="csi-node-driver-chq6c" WorkloadEndpoint="localhost-k8s-csi--node--driver--chq6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--chq6c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c81581c-25f7-472a-9631-e6c9dfccb268", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 28, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366", Pod:"csi-node-driver-chq6c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid97746cb5d7", MAC:"0e:39:a7:a3:5f:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:54:58.535309 containerd[1646]: 2025-11-05 04:54:58.522 [INFO][4624] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" Namespace="calico-system" Pod="csi-node-driver-chq6c" WorkloadEndpoint="localhost-k8s-csi--node--driver--chq6c-eth0" Nov 5 04:54:58.594838 systemd-networkd[1531]: cali9186e657270: Link UP Nov 5 04:54:58.600341 systemd-networkd[1531]: cali9186e657270: Gained carrier Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.221 [INFO][4613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--vtqjl-eth0 goldmane-666569f655- calico-system bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c 853 0 2025-11-05 04:54:26 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-vtqjl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9186e657270 [] [] }} ContainerID="582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" Namespace="calico-system" Pod="goldmane-666569f655-vtqjl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vtqjl-" Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.221 [INFO][4613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" Namespace="calico-system" Pod="goldmane-666569f655-vtqjl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vtqjl-eth0" Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.292 [INFO][4660] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" HandleID="k8s-pod-network.582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" Workload="localhost-k8s-goldmane--666569f655--vtqjl-eth0" Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.292 [INFO][4660] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" HandleID="k8s-pod-network.582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" Workload="localhost-k8s-goldmane--666569f655--vtqjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000128e90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-vtqjl", "timestamp":"2025-11-05 04:54:58.29213499 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.292 [INFO][4660] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.489 [INFO][4660] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.489 [INFO][4660] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.526 [INFO][4660] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" host="localhost" Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.536 [INFO][4660] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.543 [INFO][4660] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.545 [INFO][4660] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.548 [INFO][4660] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.548 [INFO][4660] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" host="localhost" Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.550 [INFO][4660] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73 Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.578 [INFO][4660] ipam/ipam.go 
1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" host="localhost" Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.587 [INFO][4660] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" host="localhost" Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.587 [INFO][4660] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" host="localhost" Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.587 [INFO][4660] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:54:58.692910 containerd[1646]: 2025-11-05 04:54:58.587 [INFO][4660] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" HandleID="k8s-pod-network.582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" Workload="localhost-k8s-goldmane--666569f655--vtqjl-eth0" Nov 5 04:54:58.693687 containerd[1646]: 2025-11-05 04:54:58.591 [INFO][4613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" Namespace="calico-system" Pod="goldmane-666569f655-vtqjl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vtqjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vtqjl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 26, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-vtqjl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9186e657270", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:54:58.693687 containerd[1646]: 2025-11-05 04:54:58.591 [INFO][4613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" Namespace="calico-system" Pod="goldmane-666569f655-vtqjl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vtqjl-eth0" Nov 5 04:54:58.693687 containerd[1646]: 2025-11-05 04:54:58.591 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9186e657270 ContainerID="582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" Namespace="calico-system" Pod="goldmane-666569f655-vtqjl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vtqjl-eth0" Nov 5 04:54:58.693687 containerd[1646]: 2025-11-05 04:54:58.597 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" Namespace="calico-system" Pod="goldmane-666569f655-vtqjl" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vtqjl-eth0" Nov 5 04:54:58.693687 containerd[1646]: 2025-11-05 04:54:58.597 [INFO][4613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" Namespace="calico-system" Pod="goldmane-666569f655-vtqjl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vtqjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vtqjl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73", Pod:"goldmane-666569f655-vtqjl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9186e657270", MAC:"ae:b6:f7:c7:36:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:54:58.693687 containerd[1646]: 2025-11-05 04:54:58.685 [INFO][4613] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" Namespace="calico-system" Pod="goldmane-666569f655-vtqjl" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vtqjl-eth0" Nov 5 04:54:58.718060 containerd[1646]: time="2025-11-05T04:54:58.717995721Z" level=info msg="connecting to shim b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672" address="unix:///run/containerd/s/6418c1215c23fc51c09ab467e639ca87fabba71404c2f061de496b7e79cb7446" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:54:58.746942 containerd[1646]: time="2025-11-05T04:54:58.746259128Z" level=info msg="connecting to shim 6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366" address="unix:///run/containerd/s/e21c308b3d114957ad037378a80e7e9659b6f13d86281544c8155201a489db60" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:54:58.761878 containerd[1646]: time="2025-11-05T04:54:58.761791901Z" level=info msg="connecting to shim 582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73" address="unix:///run/containerd/s/28f8ee2925fe06e692837c735d6c032bbaf6c8cfa1ffd503d390c04ab31aa1be" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:54:58.767265 systemd[1]: Started cri-containerd-b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672.scope - libcontainer container b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672. Nov 5 04:54:58.785366 systemd[1]: Started cri-containerd-6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366.scope - libcontainer container 6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366. 
Nov 5 04:54:58.806508 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:54:58.817055 systemd[1]: Started cri-containerd-582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73.scope - libcontainer container 582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73. Nov 5 04:54:58.823127 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:54:58.840502 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:54:58.855231 containerd[1646]: time="2025-11-05T04:54:58.854971561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chq6c,Uid:2c81581c-25f7-472a-9631-e6c9dfccb268,Namespace:calico-system,Attempt:0,} returns sandbox id \"6be92fd41bdfdce0f9e47b583382f25222e30034427cf08dbd7d7d8b990a8366\"" Nov 5 04:54:58.860065 containerd[1646]: time="2025-11-05T04:54:58.859615394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 04:54:58.875148 containerd[1646]: time="2025-11-05T04:54:58.875096550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56c77d6c74-6bvgx,Uid:26f1612f-9150-497a-bbb2-ec9dde0e0a53,Namespace:calico-system,Attempt:0,} returns sandbox id \"b179ec3b9dbf6643ddbb17aae11a3da55ba00dd23e0ce9d9432990033c341672\"" Nov 5 04:54:58.880717 containerd[1646]: time="2025-11-05T04:54:58.880652243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vtqjl,Uid:bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"582c2c09b2bf359376003e936839055cc325df4af2745a4659baf4129600ef73\"" Nov 5 04:54:58.924554 kubelet[2853]: E1105 04:54:58.924501 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Nov 5 04:54:58.925689 kubelet[2853]: E1105 04:54:58.925660 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:59.225184 containerd[1646]: time="2025-11-05T04:54:59.225017357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bb96ccb8-sppjg,Uid:98e18d15-122f-4a59-81ce-7fb003c6fe97,Namespace:calico-system,Attempt:0,}" Nov 5 04:54:59.229984 containerd[1646]: time="2025-11-05T04:54:59.229925826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:54:59.234100 containerd[1646]: time="2025-11-05T04:54:59.233712279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 04:54:59.234100 containerd[1646]: time="2025-11-05T04:54:59.233796928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 5 04:54:59.234219 kubelet[2853]: E1105 04:54:59.234159 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:54:59.234275 kubelet[2853]: E1105 04:54:59.234228 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:54:59.234575 kubelet[2853]: E1105 04:54:59.234510 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5s56m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-chq6c_calico-system(2c81581c-25f7-472a-9631-e6c9dfccb268): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 04:54:59.235213 containerd[1646]: time="2025-11-05T04:54:59.235170115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 04:54:59.559843 containerd[1646]: time="2025-11-05T04:54:59.559776871Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:54:59.655575 containerd[1646]: time="2025-11-05T04:54:59.655480731Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 04:54:59.655757 containerd[1646]: time="2025-11-05T04:54:59.655494186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 5 04:54:59.655931 kubelet[2853]: E1105 04:54:59.655845 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:54:59.655931 kubelet[2853]: E1105 04:54:59.655920 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:54:59.656370 kubelet[2853]: E1105 04:54:59.656322 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a6af8f491a9646caa00c8d37bbf00fa6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mnn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-56c77d6c74-6bvgx_calico-system(26f1612f-9150-497a-bbb2-ec9dde0e0a53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 04:54:59.656871 containerd[1646]: time="2025-11-05T04:54:59.656816968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 04:54:59.838037 systemd-networkd[1531]: cali9186e657270: Gained 
IPv6LL Nov 5 04:54:59.926286 kubelet[2853]: E1105 04:54:59.926203 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:54:59.926286 kubelet[2853]: E1105 04:54:59.926247 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:55:00.013181 systemd-networkd[1531]: cali31d0b6dac71: Link UP Nov 5 04:55:00.014521 systemd-networkd[1531]: cali31d0b6dac71: Gained carrier Nov 5 04:55:00.094024 systemd-networkd[1531]: vxlan.calico: Gained IPv6LL Nov 5 04:55:00.113185 containerd[1646]: time="2025-11-05T04:55:00.113119921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:00.148110 containerd[1646]: time="2025-11-05T04:55:00.148054850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 04:55:00.148110 containerd[1646]: time="2025-11-05T04:55:00.148103311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:00.148329 kubelet[2853]: E1105 04:55:00.148292 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:55:00.148377 kubelet[2853]: E1105 04:55:00.148343 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:55:00.148637 kubelet[2853]: E1105 04:55:00.148580 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnjf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vtqjl_calico-system(bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:00.148935 containerd[1646]: time="2025-11-05T04:55:00.148730498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 04:55:00.149751 kubelet[2853]: E1105 04:55:00.149721 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtqjl" 
podUID="bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.578 [INFO][4884] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0 calico-kube-controllers-54bb96ccb8- calico-system 98e18d15-122f-4a59-81ce-7fb003c6fe97 849 0 2025-11-05 04:54:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54bb96ccb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54bb96ccb8-sppjg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali31d0b6dac71 [] [] }} ContainerID="a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" Namespace="calico-system" Pod="calico-kube-controllers-54bb96ccb8-sppjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.578 [INFO][4884] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" Namespace="calico-system" Pod="calico-kube-controllers-54bb96ccb8-sppjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.612 [INFO][4901] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" HandleID="k8s-pod-network.a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" Workload="localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.612 [INFO][4901] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" HandleID="k8s-pod-network.a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" Workload="localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000134870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54bb96ccb8-sppjg", "timestamp":"2025-11-05 04:54:59.612086639 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.612 [INFO][4901] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.612 [INFO][4901] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.612 [INFO][4901] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.619 [INFO][4901] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" host="localhost" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.623 [INFO][4901] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.627 [INFO][4901] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.629 [INFO][4901] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.631 [INFO][4901] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.631 [INFO][4901] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" host="localhost" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.633 [INFO][4901] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931 Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:54:59.657 [INFO][4901] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" host="localhost" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:55:00.006 [INFO][4901] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" host="localhost" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:55:00.006 [INFO][4901] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" host="localhost" Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:55:00.006 [INFO][4901] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 04:55:00.188393 containerd[1646]: 2025-11-05 04:55:00.006 [INFO][4901] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" HandleID="k8s-pod-network.a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" Workload="localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0" Nov 5 04:55:00.189935 containerd[1646]: 2025-11-05 04:55:00.010 [INFO][4884] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" Namespace="calico-system" Pod="calico-kube-controllers-54bb96ccb8-sppjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0", GenerateName:"calico-kube-controllers-54bb96ccb8-", Namespace:"calico-system", SelfLink:"", UID:"98e18d15-122f-4a59-81ce-7fb003c6fe97", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54bb96ccb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54bb96ccb8-sppjg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali31d0b6dac71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:55:00.189935 containerd[1646]: 2025-11-05 04:55:00.010 [INFO][4884] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" Namespace="calico-system" Pod="calico-kube-controllers-54bb96ccb8-sppjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0" Nov 5 04:55:00.189935 containerd[1646]: 2025-11-05 04:55:00.010 [INFO][4884] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31d0b6dac71 ContainerID="a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" Namespace="calico-system" Pod="calico-kube-controllers-54bb96ccb8-sppjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0" Nov 5 04:55:00.189935 containerd[1646]: 2025-11-05 04:55:00.014 [INFO][4884] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" Namespace="calico-system" Pod="calico-kube-controllers-54bb96ccb8-sppjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0" Nov 5 04:55:00.189935 containerd[1646]: 2025-11-05 04:55:00.015 [INFO][4884] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" Namespace="calico-system" Pod="calico-kube-controllers-54bb96ccb8-sppjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0", GenerateName:"calico-kube-controllers-54bb96ccb8-", Namespace:"calico-system", SelfLink:"", UID:"98e18d15-122f-4a59-81ce-7fb003c6fe97", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54bb96ccb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931", Pod:"calico-kube-controllers-54bb96ccb8-sppjg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali31d0b6dac71", MAC:"72:bf:d7:82:a9:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:55:00.189935 containerd[1646]: 2025-11-05 04:55:00.184 [INFO][4884] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" Namespace="calico-system" Pod="calico-kube-controllers-54bb96ccb8-sppjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54bb96ccb8--sppjg-eth0" Nov 5 04:55:00.217894 containerd[1646]: time="2025-11-05T04:55:00.217808831Z" level=info msg="connecting to shim 
a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931" address="unix:///run/containerd/s/10af077b11ff256a59c8f0c0956d006eaf4d2635667b938236b17f9e47e04ba6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:55:00.222153 systemd-networkd[1531]: calid97746cb5d7: Gained IPv6LL Nov 5 04:55:00.252025 systemd[1]: Started cri-containerd-a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931.scope - libcontainer container a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931. Nov 5 04:55:00.270643 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:55:00.286037 systemd-networkd[1531]: cali033b53c8154: Gained IPv6LL Nov 5 04:55:00.309221 containerd[1646]: time="2025-11-05T04:55:00.306212930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bb96ccb8-sppjg,Uid:98e18d15-122f-4a59-81ce-7fb003c6fe97,Namespace:calico-system,Attempt:0,} returns sandbox id \"a699c893a27a58a67d3f4331acf5e386d37eaea04af9dd3db61b0ac036fe8931\"" Nov 5 04:55:00.488480 containerd[1646]: time="2025-11-05T04:55:00.488316633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:00.712075 containerd[1646]: time="2025-11-05T04:55:00.711990974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 04:55:00.712612 containerd[1646]: time="2025-11-05T04:55:00.712087666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:00.712662 kubelet[2853]: E1105 04:55:00.712343 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:55:00.712662 kubelet[2853]: E1105 04:55:00.712395 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:55:00.712745 kubelet[2853]: E1105 04:55:00.712610 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5s56m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,Im
agePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-chq6c_calico-system(2c81581c-25f7-472a-9631-e6c9dfccb268): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:00.713270 containerd[1646]: time="2025-11-05T04:55:00.713223366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 04:55:00.714435 kubelet[2853]: E1105 04:55:00.714379 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:55:00.930208 kubelet[2853]: E1105 04:55:00.930149 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtqjl" podUID="bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c" Nov 5 04:55:00.931216 kubelet[2853]: E1105 04:55:00.931159 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:55:01.160584 containerd[1646]: time="2025-11-05T04:55:01.160521694Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:01.161890 containerd[1646]: time="2025-11-05T04:55:01.161815642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 04:55:01.162058 containerd[1646]: time="2025-11-05T04:55:01.161886525Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:01.162206 kubelet[2853]: E1105 04:55:01.162146 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:55:01.162261 kubelet[2853]: E1105 04:55:01.162220 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:55:01.162650 kubelet[2853]: E1105 04:55:01.162496 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mnn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serv
iceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-56c77d6c74-6bvgx_calico-system(26f1612f-9150-497a-bbb2-ec9dde0e0a53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:01.162769 containerd[1646]: time="2025-11-05T04:55:01.162690702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 04:55:01.163989 kubelet[2853]: E1105 04:55:01.163894 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56c77d6c74-6bvgx" 
podUID="26f1612f-9150-497a-bbb2-ec9dde0e0a53" Nov 5 04:55:01.487066 containerd[1646]: time="2025-11-05T04:55:01.486981673Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:01.593658 containerd[1646]: time="2025-11-05T04:55:01.593584537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 04:55:01.593730 containerd[1646]: time="2025-11-05T04:55:01.593648708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:01.594005 kubelet[2853]: E1105 04:55:01.593923 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:55:01.594005 kubelet[2853]: E1105 04:55:01.593974 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:55:01.594318 kubelet[2853]: E1105 04:55:01.594121 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6dxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54bb96ccb8-sppjg_calico-system(98e18d15-122f-4a59-81ce-7fb003c6fe97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:01.595657 kubelet[2853]: E1105 04:55:01.595563 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bb96ccb8-sppjg" podUID="98e18d15-122f-4a59-81ce-7fb003c6fe97" Nov 5 04:55:01.694084 systemd-networkd[1531]: cali31d0b6dac71: Gained IPv6LL Nov 5 04:55:01.932936 kubelet[2853]: E1105 04:55:01.932804 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bb96ccb8-sppjg" podUID="98e18d15-122f-4a59-81ce-7fb003c6fe97" Nov 5 04:55:01.933360 kubelet[2853]: E1105 04:55:01.933083 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56c77d6c74-6bvgx" podUID="26f1612f-9150-497a-bbb2-ec9dde0e0a53" Nov 5 04:55:03.984840 systemd[1]: Started sshd@9-10.0.0.99:22-10.0.0.1:48596.service - OpenSSH per-connection server daemon (10.0.0.1:48596). Nov 5 04:55:04.067218 sshd[4978]: Accepted publickey for core from 10.0.0.1 port 48596 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:04.068770 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:04.073970 systemd-logind[1631]: New session 10 of user core. Nov 5 04:55:04.079011 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 5 04:55:04.197523 sshd[4981]: Connection closed by 10.0.0.1 port 48596 Nov 5 04:55:04.197840 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:04.203421 systemd[1]: sshd@9-10.0.0.99:22-10.0.0.1:48596.service: Deactivated successfully. Nov 5 04:55:04.205965 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 04:55:04.207086 systemd-logind[1631]: Session 10 logged out. Waiting for processes to exit. Nov 5 04:55:04.208426 systemd-logind[1631]: Removed session 10. Nov 5 04:55:07.224505 containerd[1646]: time="2025-11-05T04:55:07.224396725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68789659cc-swcqh,Uid:963b2307-a381-4c52-97eb-b8c873c4eef3,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:55:07.322263 systemd-networkd[1531]: calib5ec043337f: Link UP Nov 5 04:55:07.322835 systemd-networkd[1531]: calib5ec043337f: Gained carrier Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.258 [INFO][4997] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0 calico-apiserver-68789659cc- calico-apiserver 963b2307-a381-4c52-97eb-b8c873c4eef3 851 0 2025-11-05 04:54:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68789659cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68789659cc-swcqh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib5ec043337f [] [] }} ContainerID="0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-swcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--swcqh-" Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.259 [INFO][4997] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-swcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0" Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.285 [INFO][5012] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" HandleID="k8s-pod-network.0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" Workload="localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0" Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.286 [INFO][5012] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" HandleID="k8s-pod-network.0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" Workload="localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68789659cc-swcqh", "timestamp":"2025-11-05 04:55:07.285986173 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.286 [INFO][5012] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.286 [INFO][5012] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.286 [INFO][5012] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.292 [INFO][5012] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" host="localhost" Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.296 [INFO][5012] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.301 [INFO][5012] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.303 [INFO][5012] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.305 [INFO][5012] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.305 [INFO][5012] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" host="localhost" Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.306 [INFO][5012] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4 Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.310 [INFO][5012] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" host="localhost" Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.315 [INFO][5012] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" host="localhost" Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.315 [INFO][5012] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" host="localhost" Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.315 [INFO][5012] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:55:07.338740 containerd[1646]: 2025-11-05 04:55:07.315 [INFO][5012] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" HandleID="k8s-pod-network.0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" Workload="localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0" Nov 5 04:55:07.339495 containerd[1646]: 2025-11-05 04:55:07.319 [INFO][4997] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-swcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0", GenerateName:"calico-apiserver-68789659cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"963b2307-a381-4c52-97eb-b8c873c4eef3", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68789659cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68789659cc-swcqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5ec043337f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:55:07.339495 containerd[1646]: 2025-11-05 04:55:07.319 [INFO][4997] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-swcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0" Nov 5 04:55:07.339495 containerd[1646]: 2025-11-05 04:55:07.319 [INFO][4997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5ec043337f ContainerID="0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-swcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0" Nov 5 04:55:07.339495 containerd[1646]: 2025-11-05 04:55:07.322 [INFO][4997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-swcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0" Nov 5 04:55:07.339495 containerd[1646]: 2025-11-05 04:55:07.325 [INFO][4997] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-swcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0", GenerateName:"calico-apiserver-68789659cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"963b2307-a381-4c52-97eb-b8c873c4eef3", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68789659cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4", Pod:"calico-apiserver-68789659cc-swcqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5ec043337f", MAC:"6e:a5:56:67:25:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:55:07.339495 containerd[1646]: 2025-11-05 04:55:07.334 [INFO][4997] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" Namespace="calico-apiserver" Pod="calico-apiserver-68789659cc-swcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--68789659cc--swcqh-eth0" Nov 5 04:55:07.363695 containerd[1646]: time="2025-11-05T04:55:07.363647113Z" level=info msg="connecting to shim 0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4" address="unix:///run/containerd/s/89315bf7beea8ad6385f1a32280e36a5544b7c1a59e312ccf1f9021f1c5c6392" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:55:07.390025 systemd[1]: Started cri-containerd-0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4.scope - libcontainer container 0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4. Nov 5 04:55:07.404093 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:55:07.435464 containerd[1646]: time="2025-11-05T04:55:07.435412412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68789659cc-swcqh,Uid:963b2307-a381-4c52-97eb-b8c873c4eef3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0a3ded62f050fe47ec0a00bd128bbcfc301111cbd677acc40389ca5eea6e35c4\"" Nov 5 04:55:07.437055 containerd[1646]: time="2025-11-05T04:55:07.436999205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:55:07.765472 containerd[1646]: time="2025-11-05T04:55:07.765384898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:07.786827 containerd[1646]: time="2025-11-05T04:55:07.786624204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:55:07.786827 containerd[1646]: time="2025-11-05T04:55:07.786702705Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:07.787079 kubelet[2853]: E1105 04:55:07.786966 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:55:07.787079 kubelet[2853]: E1105 04:55:07.787025 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:55:07.787597 kubelet[2853]: E1105 04:55:07.787185 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6v75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68789659cc-swcqh_calico-apiserver(963b2307-a381-4c52-97eb-b8c873c4eef3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:07.788339 kubelet[2853]: E1105 04:55:07.788299 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" podUID="963b2307-a381-4c52-97eb-b8c873c4eef3" Nov 5 04:55:07.944233 kubelet[2853]: E1105 04:55:07.944176 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" podUID="963b2307-a381-4c52-97eb-b8c873c4eef3" Nov 5 04:55:08.926072 systemd-networkd[1531]: calib5ec043337f: Gained IPv6LL Nov 5 04:55:08.946847 kubelet[2853]: E1105 04:55:08.946785 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" podUID="963b2307-a381-4c52-97eb-b8c873c4eef3" Nov 5 04:55:09.216956 systemd[1]: Started 
sshd@10-10.0.0.99:22-10.0.0.1:48610.service - OpenSSH per-connection server daemon (10.0.0.1:48610). Nov 5 04:55:09.300767 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 48610 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:09.302545 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:09.307449 systemd-logind[1631]: New session 11 of user core. Nov 5 04:55:09.311998 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 04:55:09.417432 sshd[5090]: Connection closed by 10.0.0.1 port 48610 Nov 5 04:55:09.417776 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:09.426505 systemd[1]: sshd@10-10.0.0.99:22-10.0.0.1:48610.service: Deactivated successfully. Nov 5 04:55:09.428624 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 04:55:09.429423 systemd-logind[1631]: Session 11 logged out. Waiting for processes to exit. Nov 5 04:55:09.430533 systemd-logind[1631]: Removed session 11. 
Nov 5 04:55:12.225098 containerd[1646]: time="2025-11-05T04:55:12.225043281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:55:12.541700 containerd[1646]: time="2025-11-05T04:55:12.541608033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:12.542982 containerd[1646]: time="2025-11-05T04:55:12.542922671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:55:12.543043 containerd[1646]: time="2025-11-05T04:55:12.542964191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:12.543257 kubelet[2853]: E1105 04:55:12.543168 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:55:12.543257 kubelet[2853]: E1105 04:55:12.543236 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:55:12.543828 kubelet[2853]: E1105 04:55:12.543463 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h4xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68789659cc-mdcdc_calico-apiserver(bde487fb-2c18-4dab-a763-3054070918ea): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:12.544047 containerd[1646]: time="2025-11-05T04:55:12.543562171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 04:55:12.545302 kubelet[2853]: E1105 04:55:12.545257 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-mdcdc" podUID="bde487fb-2c18-4dab-a763-3054070918ea" Nov 5 04:55:12.871681 containerd[1646]: time="2025-11-05T04:55:12.871522897Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:12.872971 containerd[1646]: time="2025-11-05T04:55:12.872910947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 04:55:12.873393 containerd[1646]: time="2025-11-05T04:55:12.873013985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:12.873449 kubelet[2853]: E1105 04:55:12.873158 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:55:12.873449 kubelet[2853]: E1105 04:55:12.873206 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:55:12.873449 kubelet[2853]: E1105 04:55:12.873343 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnjf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vtqjl_calico-system(bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:12.875419 kubelet[2853]: E1105 04:55:12.875374 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtqjl" podUID="bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c" Nov 5 04:55:13.225855 containerd[1646]: time="2025-11-05T04:55:13.225609728Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 04:55:13.559049 containerd[1646]: time="2025-11-05T04:55:13.558965290Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:13.581626 containerd[1646]: time="2025-11-05T04:55:13.581570543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 04:55:13.581705 containerd[1646]: time="2025-11-05T04:55:13.581658602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:13.581868 kubelet[2853]: E1105 04:55:13.581809 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:55:13.582291 kubelet[2853]: E1105 04:55:13.581907 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:55:13.582435 containerd[1646]: time="2025-11-05T04:55:13.582224479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 04:55:13.583077 kubelet[2853]: E1105 04:55:13.582181 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5s56m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-chq6c_calico-system(2c81581c-25f7-472a-9631-e6c9dfccb268): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Nov 5 04:55:13.934982 containerd[1646]: time="2025-11-05T04:55:13.934797984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:14.060932 containerd[1646]: time="2025-11-05T04:55:14.060832963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 04:55:14.061101 containerd[1646]: time="2025-11-05T04:55:14.060917175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:14.061216 kubelet[2853]: E1105 04:55:14.061146 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:55:14.061274 kubelet[2853]: E1105 04:55:14.061228 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:55:14.061596 containerd[1646]: time="2025-11-05T04:55:14.061539930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 04:55:14.061777 kubelet[2853]: E1105 04:55:14.061586 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6dxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54bb96ccb8-sppjg_calico-system(98e18d15-122f-4a59-81ce-7fb003c6fe97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:14.062968 kubelet[2853]: E1105 04:55:14.062924 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bb96ccb8-sppjg" podUID="98e18d15-122f-4a59-81ce-7fb003c6fe97" Nov 5 04:55:14.431521 systemd[1]: Started sshd@11-10.0.0.99:22-10.0.0.1:47998.service - OpenSSH per-connection server daemon (10.0.0.1:47998). 
Nov 5 04:55:14.498003 containerd[1646]: time="2025-11-05T04:55:14.497924739Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:14.499065 sshd[5106]: Accepted publickey for core from 10.0.0.1 port 47998 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:14.499367 containerd[1646]: time="2025-11-05T04:55:14.499317745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 04:55:14.499415 containerd[1646]: time="2025-11-05T04:55:14.499406475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:14.499647 kubelet[2853]: E1105 04:55:14.499599 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:55:14.499728 kubelet[2853]: E1105 04:55:14.499651 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:55:14.500140 kubelet[2853]: E1105 04:55:14.499982 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a6af8f491a9646caa00c8d37bbf00fa6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mnn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-56c77d6c74-6bvgx_calico-system(26f1612f-9150-497a-bbb2-ec9dde0e0a53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:14.500362 containerd[1646]: time="2025-11-05T04:55:14.500076973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 04:55:14.501274 sshd-session[5106]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:14.506777 systemd-logind[1631]: New session 12 of user core. Nov 5 04:55:14.516010 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 04:55:14.599607 sshd[5109]: Connection closed by 10.0.0.1 port 47998 Nov 5 04:55:14.599954 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:14.604596 systemd[1]: sshd@11-10.0.0.99:22-10.0.0.1:47998.service: Deactivated successfully. Nov 5 04:55:14.606756 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 04:55:14.607534 systemd-logind[1631]: Session 12 logged out. Waiting for processes to exit. Nov 5 04:55:14.608641 systemd-logind[1631]: Removed session 12. Nov 5 04:55:14.859019 containerd[1646]: time="2025-11-05T04:55:14.858924267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:14.860665 containerd[1646]: time="2025-11-05T04:55:14.860563135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 04:55:14.860665 containerd[1646]: time="2025-11-05T04:55:14.860630084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:14.861252 kubelet[2853]: E1105 04:55:14.861209 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:55:14.862055 kubelet[2853]: E1105 04:55:14.861268 2853 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:55:14.862055 kubelet[2853]: E1105 04:55:14.861590 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5s56m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:Runt
imeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-chq6c_calico-system(2c81581c-25f7-472a-9631-e6c9dfccb268): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:14.862340 containerd[1646]: time="2025-11-05T04:55:14.861945630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 04:55:14.863067 kubelet[2853]: E1105 04:55:14.863023 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:55:15.207540 containerd[1646]: time="2025-11-05T04:55:15.207379843Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:15.283062 containerd[1646]: time="2025-11-05T04:55:15.282995135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:15.283062 containerd[1646]: time="2025-11-05T04:55:15.283019722Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 04:55:15.283225 kubelet[2853]: E1105 04:55:15.283190 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:55:15.283293 kubelet[2853]: E1105 04:55:15.283239 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:55:15.283394 kubelet[2853]: E1105 04:55:15.283359 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mnn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-56c77d6c74-6bvgx_calico-system(26f1612f-9150-497a-bbb2-ec9dde0e0a53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:15.284551 kubelet[2853]: E1105 04:55:15.284509 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56c77d6c74-6bvgx" podUID="26f1612f-9150-497a-bbb2-ec9dde0e0a53" Nov 5 04:55:19.613032 systemd[1]: Started sshd@12-10.0.0.99:22-10.0.0.1:48010.service - OpenSSH per-connection server daemon (10.0.0.1:48010). Nov 5 04:55:19.667659 sshd[5133]: Accepted publickey for core from 10.0.0.1 port 48010 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:19.669104 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:19.673623 systemd-logind[1631]: New session 13 of user core. Nov 5 04:55:19.684007 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 04:55:19.760918 sshd[5136]: Connection closed by 10.0.0.1 port 48010 Nov 5 04:55:19.761247 sshd-session[5133]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:19.769665 systemd[1]: sshd@12-10.0.0.99:22-10.0.0.1:48010.service: Deactivated successfully. Nov 5 04:55:19.771573 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 04:55:19.772506 systemd-logind[1631]: Session 13 logged out. Waiting for processes to exit. 
Nov 5 04:55:19.775529 systemd[1]: Started sshd@13-10.0.0.99:22-10.0.0.1:48014.service - OpenSSH per-connection server daemon (10.0.0.1:48014). Nov 5 04:55:19.776407 systemd-logind[1631]: Removed session 13. Nov 5 04:55:19.833235 sshd[5151]: Accepted publickey for core from 10.0.0.1 port 48014 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:19.834594 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:19.839441 systemd-logind[1631]: New session 14 of user core. Nov 5 04:55:19.849989 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 04:55:19.960248 sshd[5154]: Connection closed by 10.0.0.1 port 48014 Nov 5 04:55:19.962467 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:19.974990 systemd[1]: sshd@13-10.0.0.99:22-10.0.0.1:48014.service: Deactivated successfully. Nov 5 04:55:19.977766 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 04:55:19.979952 systemd-logind[1631]: Session 14 logged out. Waiting for processes to exit. Nov 5 04:55:19.983710 systemd[1]: Started sshd@14-10.0.0.99:22-10.0.0.1:48028.service - OpenSSH per-connection server daemon (10.0.0.1:48028). Nov 5 04:55:19.985055 systemd-logind[1631]: Removed session 14. Nov 5 04:55:20.043268 sshd[5166]: Accepted publickey for core from 10.0.0.1 port 48028 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:20.044763 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:20.049458 systemd-logind[1631]: New session 15 of user core. Nov 5 04:55:20.064017 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 04:55:20.145004 sshd[5169]: Connection closed by 10.0.0.1 port 48028 Nov 5 04:55:20.145326 sshd-session[5166]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:20.149024 systemd[1]: sshd@14-10.0.0.99:22-10.0.0.1:48028.service: Deactivated successfully. 
Nov 5 04:55:20.151615 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 04:55:20.153167 systemd-logind[1631]: Session 15 logged out. Waiting for processes to exit. Nov 5 04:55:20.154821 systemd-logind[1631]: Removed session 15. Nov 5 04:55:22.223899 kubelet[2853]: E1105 04:55:22.223812 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:55:22.225018 containerd[1646]: time="2025-11-05T04:55:22.224833420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:55:22.580301 containerd[1646]: time="2025-11-05T04:55:22.580219924Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:22.581667 containerd[1646]: time="2025-11-05T04:55:22.581587298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:55:22.581725 containerd[1646]: time="2025-11-05T04:55:22.581674344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:22.582057 kubelet[2853]: E1105 04:55:22.581990 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:55:22.582132 kubelet[2853]: E1105 04:55:22.582063 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:55:22.582281 kubelet[2853]: E1105 04:55:22.582221 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6v75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68789659cc-swcqh_calico-apiserver(963b2307-a381-4c52-97eb-b8c873c4eef3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:22.583460 kubelet[2853]: E1105 04:55:22.583421 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" podUID="963b2307-a381-4c52-97eb-b8c873c4eef3" Nov 5 04:55:23.225661 kubelet[2853]: E1105 04:55:23.225582 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-mdcdc" podUID="bde487fb-2c18-4dab-a763-3054070918ea" Nov 5 04:55:24.225033 kubelet[2853]: E1105 04:55:24.224961 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtqjl" podUID="bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c" Nov 5 04:55:25.164388 systemd[1]: Started sshd@15-10.0.0.99:22-10.0.0.1:57700.service - OpenSSH per-connection server daemon (10.0.0.1:57700). Nov 5 04:55:25.226473 sshd[5182]: Accepted publickey for core from 10.0.0.1 port 57700 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:25.229110 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:25.233987 systemd-logind[1631]: New session 16 of user core. Nov 5 04:55:25.244211 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 04:55:25.329571 sshd[5185]: Connection closed by 10.0.0.1 port 57700 Nov 5 04:55:25.330011 sshd-session[5182]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:25.335442 systemd[1]: sshd@15-10.0.0.99:22-10.0.0.1:57700.service: Deactivated successfully. Nov 5 04:55:25.337987 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 04:55:25.339209 systemd-logind[1631]: Session 16 logged out. Waiting for processes to exit. Nov 5 04:55:25.340678 systemd-logind[1631]: Removed session 16. 
Nov 5 04:55:27.010047 kubelet[2853]: E1105 04:55:27.009997 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:55:27.224340 kubelet[2853]: E1105 04:55:27.224293 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:55:27.224965 kubelet[2853]: E1105 04:55:27.224937 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bb96ccb8-sppjg" podUID="98e18d15-122f-4a59-81ce-7fb003c6fe97" Nov 5 04:55:28.225015 kubelet[2853]: E1105 04:55:28.224953 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56c77d6c74-6bvgx" 
podUID="26f1612f-9150-497a-bbb2-ec9dde0e0a53" Nov 5 04:55:28.226005 kubelet[2853]: E1105 04:55:28.225032 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:55:30.345587 systemd[1]: Started sshd@16-10.0.0.99:22-10.0.0.1:57712.service - OpenSSH per-connection server daemon (10.0.0.1:57712). Nov 5 04:55:30.424953 sshd[5232]: Accepted publickey for core from 10.0.0.1 port 57712 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:30.427940 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:30.434136 systemd-logind[1631]: New session 17 of user core. Nov 5 04:55:30.440334 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 04:55:30.546886 sshd[5235]: Connection closed by 10.0.0.1 port 57712 Nov 5 04:55:30.547246 sshd-session[5232]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:30.552323 systemd[1]: sshd@16-10.0.0.99:22-10.0.0.1:57712.service: Deactivated successfully. Nov 5 04:55:30.554613 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 04:55:30.555411 systemd-logind[1631]: Session 17 logged out. 
Waiting for processes to exit. Nov 5 04:55:30.556581 systemd-logind[1631]: Removed session 17. Nov 5 04:55:34.225253 kubelet[2853]: E1105 04:55:34.225196 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" podUID="963b2307-a381-4c52-97eb-b8c873c4eef3" Nov 5 04:55:35.225533 containerd[1646]: time="2025-11-05T04:55:35.225249409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 04:55:35.563579 systemd[1]: Started sshd@17-10.0.0.99:22-10.0.0.1:50726.service - OpenSSH per-connection server daemon (10.0.0.1:50726). Nov 5 04:55:35.564768 containerd[1646]: time="2025-11-05T04:55:35.564701713Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:35.566122 containerd[1646]: time="2025-11-05T04:55:35.566048374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 04:55:35.566193 containerd[1646]: time="2025-11-05T04:55:35.566133126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:35.566359 kubelet[2853]: E1105 04:55:35.566313 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:55:35.566713 kubelet[2853]: E1105 04:55:35.566374 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:55:35.566713 kubelet[2853]: E1105 04:55:35.566570 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnjf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:P
robeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vtqjl_calico-system(bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:35.567901 kubelet[2853]: E1105 04:55:35.567827 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtqjl" podUID="bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c" Nov 5 04:55:35.628238 sshd[5248]: Accepted publickey for core from 
10.0.0.1 port 50726 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:35.630027 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:35.634533 systemd-logind[1631]: New session 18 of user core. Nov 5 04:55:35.642992 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 04:55:35.723316 sshd[5251]: Connection closed by 10.0.0.1 port 50726 Nov 5 04:55:35.723627 sshd-session[5248]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:35.728529 systemd[1]: sshd@17-10.0.0.99:22-10.0.0.1:50726.service: Deactivated successfully. Nov 5 04:55:35.730629 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 04:55:35.731451 systemd-logind[1631]: Session 18 logged out. Waiting for processes to exit. Nov 5 04:55:35.732722 systemd-logind[1631]: Removed session 18. Nov 5 04:55:36.224024 kubelet[2853]: E1105 04:55:36.223970 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:55:38.225336 containerd[1646]: time="2025-11-05T04:55:38.225277264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:55:38.552786 containerd[1646]: time="2025-11-05T04:55:38.552721611Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:38.655656 containerd[1646]: time="2025-11-05T04:55:38.655591005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:55:38.655656 containerd[1646]: time="2025-11-05T04:55:38.655633886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:38.655904 kubelet[2853]: E1105 
04:55:38.655832 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:55:38.656284 kubelet[2853]: E1105 04:55:38.655914 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:55:38.656284 kubelet[2853]: E1105 04:55:38.656094 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h4xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68789659cc-mdcdc_calico-apiserver(bde487fb-2c18-4dab-a763-3054070918ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:38.658044 kubelet[2853]: E1105 04:55:38.658007 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-mdcdc" podUID="bde487fb-2c18-4dab-a763-3054070918ea" Nov 5 04:55:40.225613 containerd[1646]: time="2025-11-05T04:55:40.225543679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 04:55:40.551947 containerd[1646]: time="2025-11-05T04:55:40.551884187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:40.669128 
containerd[1646]: time="2025-11-05T04:55:40.669023713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 04:55:40.669253 containerd[1646]: time="2025-11-05T04:55:40.669089797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:40.669469 kubelet[2853]: E1105 04:55:40.669395 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:55:40.669896 kubelet[2853]: E1105 04:55:40.669471 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:55:40.669896 kubelet[2853]: E1105 04:55:40.669709 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5s56m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-chq6c_calico-system(2c81581c-25f7-472a-9631-e6c9dfccb268): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Nov 5 04:55:40.671800 containerd[1646]: time="2025-11-05T04:55:40.671752043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 04:55:40.736433 systemd[1]: Started sshd@18-10.0.0.99:22-10.0.0.1:50742.service - OpenSSH per-connection server daemon (10.0.0.1:50742). Nov 5 04:55:40.798772 sshd[5272]: Accepted publickey for core from 10.0.0.1 port 50742 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:40.800523 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:40.805832 systemd-logind[1631]: New session 19 of user core. Nov 5 04:55:40.815138 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 04:55:40.897237 sshd[5275]: Connection closed by 10.0.0.1 port 50742 Nov 5 04:55:40.897636 sshd-session[5272]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:40.902739 systemd[1]: sshd@18-10.0.0.99:22-10.0.0.1:50742.service: Deactivated successfully. Nov 5 04:55:40.905036 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 04:55:40.906005 systemd-logind[1631]: Session 19 logged out. Waiting for processes to exit. Nov 5 04:55:40.907443 systemd-logind[1631]: Removed session 19. 
Nov 5 04:55:41.028677 containerd[1646]: time="2025-11-05T04:55:41.028610614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:41.030007 containerd[1646]: time="2025-11-05T04:55:41.029927874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 04:55:41.030093 containerd[1646]: time="2025-11-05T04:55:41.030005331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:41.030239 kubelet[2853]: E1105 04:55:41.030180 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:55:41.030298 kubelet[2853]: E1105 04:55:41.030240 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:55:41.030458 kubelet[2853]: E1105 04:55:41.030394 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5s56m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-chq6c_calico-system(2c81581c-25f7-472a-9631-e6c9dfccb268): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:41.031680 kubelet[2853]: E1105 04:55:41.031597 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268" Nov 5 04:55:41.225442 containerd[1646]: time="2025-11-05T04:55:41.225217387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 04:55:41.530805 containerd[1646]: time="2025-11-05T04:55:41.530737549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:41.532137 containerd[1646]: time="2025-11-05T04:55:41.532089946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 04:55:41.532204 containerd[1646]: time="2025-11-05T04:55:41.532176380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:41.532410 kubelet[2853]: E1105 04:55:41.532352 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:55:41.532478 kubelet[2853]: E1105 04:55:41.532415 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:55:41.532647 kubelet[2853]: E1105 04:55:41.532579 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6dxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54bb96ccb8-sppjg_calico-system(98e18d15-122f-4a59-81ce-7fb003c6fe97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:41.533823 kubelet[2853]: E1105 04:55:41.533765 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bb96ccb8-sppjg" podUID="98e18d15-122f-4a59-81ce-7fb003c6fe97" Nov 5 04:55:42.224880 
containerd[1646]: time="2025-11-05T04:55:42.224797321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 04:55:42.541622 containerd[1646]: time="2025-11-05T04:55:42.541534554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:42.598325 containerd[1646]: time="2025-11-05T04:55:42.598267610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:42.598469 containerd[1646]: time="2025-11-05T04:55:42.598333455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 04:55:42.598646 kubelet[2853]: E1105 04:55:42.598591 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:55:42.599070 kubelet[2853]: E1105 04:55:42.598650 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:55:42.599070 kubelet[2853]: E1105 04:55:42.598768 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a6af8f491a9646caa00c8d37bbf00fa6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mnn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-56c77d6c74-6bvgx_calico-system(26f1612f-9150-497a-bbb2-ec9dde0e0a53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:42.601098 containerd[1646]: time="2025-11-05T04:55:42.601044890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 04:55:43.018549 containerd[1646]: 
time="2025-11-05T04:55:43.018515532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:43.128339 containerd[1646]: time="2025-11-05T04:55:43.128270216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 04:55:43.128458 containerd[1646]: time="2025-11-05T04:55:43.128328988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:43.128603 kubelet[2853]: E1105 04:55:43.128542 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:55:43.128672 kubelet[2853]: E1105 04:55:43.128614 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:55:43.128784 kubelet[2853]: E1105 04:55:43.128748 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mnn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-56c77d6c74-6bvgx_calico-system(26f1612f-9150-497a-bbb2-ec9dde0e0a53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:43.129980 kubelet[2853]: E1105 04:55:43.129933 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56c77d6c74-6bvgx" podUID="26f1612f-9150-497a-bbb2-ec9dde0e0a53" Nov 5 04:55:44.224673 kubelet[2853]: E1105 04:55:44.224599 2853 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:55:45.911086 systemd[1]: Started sshd@19-10.0.0.99:22-10.0.0.1:55630.service - OpenSSH per-connection server daemon (10.0.0.1:55630). Nov 5 04:55:45.971680 sshd[5288]: Accepted publickey for core from 10.0.0.1 port 55630 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:45.973181 sshd-session[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:45.977689 systemd-logind[1631]: New session 20 of user core. Nov 5 04:55:45.990100 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 04:55:46.082356 sshd[5291]: Connection closed by 10.0.0.1 port 55630 Nov 5 04:55:46.082851 sshd-session[5288]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:46.096641 systemd[1]: sshd@19-10.0.0.99:22-10.0.0.1:55630.service: Deactivated successfully. 
Nov 5 04:55:46.098793 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 04:55:46.099654 systemd-logind[1631]: Session 20 logged out. Waiting for processes to exit. Nov 5 04:55:46.103082 systemd[1]: Started sshd@20-10.0.0.99:22-10.0.0.1:55634.service - OpenSSH per-connection server daemon (10.0.0.1:55634). Nov 5 04:55:46.103773 systemd-logind[1631]: Removed session 20. Nov 5 04:55:46.162981 sshd[5304]: Accepted publickey for core from 10.0.0.1 port 55634 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:46.164485 sshd-session[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:46.168836 systemd-logind[1631]: New session 21 of user core. Nov 5 04:55:46.181996 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 04:55:46.517882 sshd[5307]: Connection closed by 10.0.0.1 port 55634 Nov 5 04:55:46.518408 sshd-session[5304]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:46.531586 systemd[1]: sshd@20-10.0.0.99:22-10.0.0.1:55634.service: Deactivated successfully. Nov 5 04:55:46.533500 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 04:55:46.534305 systemd-logind[1631]: Session 21 logged out. Waiting for processes to exit. Nov 5 04:55:46.536880 systemd[1]: Started sshd@21-10.0.0.99:22-10.0.0.1:55650.service - OpenSSH per-connection server daemon (10.0.0.1:55650). Nov 5 04:55:46.537931 systemd-logind[1631]: Removed session 21. Nov 5 04:55:46.596507 sshd[5320]: Accepted publickey for core from 10.0.0.1 port 55650 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:46.597976 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:46.602280 systemd-logind[1631]: New session 22 of user core. Nov 5 04:55:46.611990 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 5 04:55:47.093954 sshd[5323]: Connection closed by 10.0.0.1 port 55650 Nov 5 04:55:47.092368 sshd-session[5320]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:47.104123 systemd[1]: sshd@21-10.0.0.99:22-10.0.0.1:55650.service: Deactivated successfully. Nov 5 04:55:47.107569 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 04:55:47.111583 systemd-logind[1631]: Session 22 logged out. Waiting for processes to exit. Nov 5 04:55:47.121241 systemd[1]: Started sshd@22-10.0.0.99:22-10.0.0.1:55660.service - OpenSSH per-connection server daemon (10.0.0.1:55660). Nov 5 04:55:47.125114 systemd-logind[1631]: Removed session 22. Nov 5 04:55:47.179404 sshd[5344]: Accepted publickey for core from 10.0.0.1 port 55660 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:47.181197 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:47.186070 systemd-logind[1631]: New session 23 of user core. Nov 5 04:55:47.201053 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 04:55:47.227797 containerd[1646]: time="2025-11-05T04:55:47.227703319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:55:47.384754 sshd[5349]: Connection closed by 10.0.0.1 port 55660 Nov 5 04:55:47.386115 sshd-session[5344]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:47.398604 systemd[1]: sshd@22-10.0.0.99:22-10.0.0.1:55660.service: Deactivated successfully. Nov 5 04:55:47.401080 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 04:55:47.401981 systemd-logind[1631]: Session 23 logged out. Waiting for processes to exit. Nov 5 04:55:47.404649 systemd[1]: Started sshd@23-10.0.0.99:22-10.0.0.1:55662.service - OpenSSH per-connection server daemon (10.0.0.1:55662). Nov 5 04:55:47.405533 systemd-logind[1631]: Removed session 23. 
Nov 5 04:55:47.464721 sshd[5363]: Accepted publickey for core from 10.0.0.1 port 55662 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:47.466608 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:47.473196 systemd-logind[1631]: New session 24 of user core. Nov 5 04:55:47.482076 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 04:55:47.564681 sshd[5366]: Connection closed by 10.0.0.1 port 55662 Nov 5 04:55:47.565043 sshd-session[5363]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:47.569483 systemd[1]: sshd@23-10.0.0.99:22-10.0.0.1:55662.service: Deactivated successfully. Nov 5 04:55:47.571948 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 04:55:47.573036 systemd-logind[1631]: Session 24 logged out. Waiting for processes to exit. Nov 5 04:55:47.574592 systemd-logind[1631]: Removed session 24. Nov 5 04:55:47.591434 containerd[1646]: time="2025-11-05T04:55:47.591380123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:55:47.718014 containerd[1646]: time="2025-11-05T04:55:47.717806946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:55:47.718014 containerd[1646]: time="2025-11-05T04:55:47.717874855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:55:47.718323 kubelet[2853]: E1105 04:55:47.718045 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 
04:55:47.718323 kubelet[2853]: E1105 04:55:47.718137 2853 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:55:47.718771 kubelet[2853]: E1105 04:55:47.718341 2853 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6v75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68789659cc-swcqh_calico-apiserver(963b2307-a381-4c52-97eb-b8c873c4eef3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:55:47.720312 kubelet[2853]: E1105 04:55:47.720271 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" podUID="963b2307-a381-4c52-97eb-b8c873c4eef3" Nov 5 04:55:48.225332 kubelet[2853]: E1105 04:55:48.225277 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtqjl" podUID="bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c" Nov 5 04:55:49.225340 kubelet[2853]: E1105 04:55:49.225290 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-mdcdc" podUID="bde487fb-2c18-4dab-a763-3054070918ea" Nov 5 04:55:52.589100 systemd[1]: Started sshd@24-10.0.0.99:22-10.0.0.1:55670.service - OpenSSH per-connection server daemon (10.0.0.1:55670). Nov 5 04:55:52.635488 sshd[5380]: Accepted publickey for core from 10.0.0.1 port 55670 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko Nov 5 04:55:52.637308 sshd-session[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:55:52.642509 systemd-logind[1631]: New session 25 of user core. Nov 5 04:55:52.653058 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 5 04:55:52.735818 sshd[5383]: Connection closed by 10.0.0.1 port 55670 Nov 5 04:55:52.736212 sshd-session[5380]: pam_unix(sshd:session): session closed for user core Nov 5 04:55:52.740977 systemd[1]: sshd@24-10.0.0.99:22-10.0.0.1:55670.service: Deactivated successfully. Nov 5 04:55:52.743057 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 04:55:52.743869 systemd-logind[1631]: Session 25 logged out. Waiting for processes to exit. Nov 5 04:55:52.745246 systemd-logind[1631]: Removed session 25. 
Nov 5 04:55:54.225492 kubelet[2853]: E1105 04:55:54.225421 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56c77d6c74-6bvgx" podUID="26f1612f-9150-497a-bbb2-ec9dde0e0a53"
Nov 5 04:55:55.225727 kubelet[2853]: E1105 04:55:55.225662 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bb96ccb8-sppjg" podUID="98e18d15-122f-4a59-81ce-7fb003c6fe97"
Nov 5 04:55:55.226502 kubelet[2853]: E1105 04:55:55.226034 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-chq6c" podUID="2c81581c-25f7-472a-9631-e6c9dfccb268"
Nov 5 04:55:57.748453 systemd[1]: Started sshd@25-10.0.0.99:22-10.0.0.1:44616.service - OpenSSH per-connection server daemon (10.0.0.1:44616).
Nov 5 04:55:57.798873 sshd[5424]: Accepted publickey for core from 10.0.0.1 port 44616 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko
Nov 5 04:55:57.800248 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 04:55:57.804638 systemd-logind[1631]: New session 26 of user core.
Nov 5 04:55:57.808984 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 5 04:55:57.878373 sshd[5427]: Connection closed by 10.0.0.1 port 44616
Nov 5 04:55:57.878681 sshd-session[5424]: pam_unix(sshd:session): session closed for user core
Nov 5 04:55:57.883339 systemd[1]: sshd@25-10.0.0.99:22-10.0.0.1:44616.service: Deactivated successfully.
Nov 5 04:55:57.885473 systemd[1]: session-26.scope: Deactivated successfully.
Nov 5 04:55:57.886266 systemd-logind[1631]: Session 26 logged out. Waiting for processes to exit.
Nov 5 04:55:57.887502 systemd-logind[1631]: Removed session 26.
Nov 5 04:56:00.225984 kubelet[2853]: E1105 04:56:00.225551 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-mdcdc" podUID="bde487fb-2c18-4dab-a763-3054070918ea"
Nov 5 04:56:02.225389 kubelet[2853]: E1105 04:56:02.225298 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68789659cc-swcqh" podUID="963b2307-a381-4c52-97eb-b8c873c4eef3"
Nov 5 04:56:02.893975 systemd[1]: Started sshd@26-10.0.0.99:22-10.0.0.1:44630.service - OpenSSH per-connection server daemon (10.0.0.1:44630).
Nov 5 04:56:02.999898 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 44630 ssh2: RSA SHA256:XiGyK5fqllnBQWxDYED3xW8VH4cMJfuo/fZHqIgMrko
Nov 5 04:56:03.007124 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 04:56:03.013545 systemd-logind[1631]: New session 27 of user core.
Nov 5 04:56:03.019081 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 5 04:56:03.117289 sshd[5443]: Connection closed by 10.0.0.1 port 44630
Nov 5 04:56:03.118715 sshd-session[5440]: pam_unix(sshd:session): session closed for user core
Nov 5 04:56:03.124322 systemd[1]: sshd@26-10.0.0.99:22-10.0.0.1:44630.service: Deactivated successfully.
Nov 5 04:56:03.126812 systemd[1]: session-27.scope: Deactivated successfully.
Nov 5 04:56:03.128070 systemd-logind[1631]: Session 27 logged out. Waiting for processes to exit.
Nov 5 04:56:03.129437 systemd-logind[1631]: Removed session 27.
Nov 5 04:56:03.224912 kubelet[2853]: E1105 04:56:03.224672 2853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vtqjl" podUID="bc7bc11f-b4fe-4b49-97ff-ff9c5cb4fe1c"