Oct 29 00:41:07.473830 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 28 22:31:02 -00 2025
Oct 29 00:41:07.473876 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae
Oct 29 00:41:07.473886 kernel: BIOS-provided physical RAM map:
Oct 29 00:41:07.473893 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Oct 29 00:41:07.473900 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Oct 29 00:41:07.473909 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Oct 29 00:41:07.473918 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Oct 29 00:41:07.473943 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Oct 29 00:41:07.473950 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Oct 29 00:41:07.473957 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Oct 29 00:41:07.473964 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Oct 29 00:41:07.473971 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Oct 29 00:41:07.473978 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Oct 29 00:41:07.473988 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Oct 29 00:41:07.473997 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Oct 29 00:41:07.474004 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Oct 29 00:41:07.474012 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 29 00:41:07.474021 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 29 00:41:07.474029 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 29 00:41:07.474036 kernel: NX (Execute Disable) protection: active
Oct 29 00:41:07.474044 kernel: APIC: Static calls initialized
Oct 29 00:41:07.474051 kernel: e820: update [mem 0x9a13d018-0x9a146c57] usable ==> usable
Oct 29 00:41:07.474059 kernel: e820: update [mem 0x9a100018-0x9a13ce57] usable ==> usable
Oct 29 00:41:07.474067 kernel: extended physical RAM map:
Oct 29 00:41:07.474074 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Oct 29 00:41:07.474082 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Oct 29 00:41:07.474089 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Oct 29 00:41:07.474097 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Oct 29 00:41:07.474106 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a100017] usable
Oct 29 00:41:07.474114 kernel: reserve setup_data: [mem 0x000000009a100018-0x000000009a13ce57] usable
Oct 29 00:41:07.474121 kernel: reserve setup_data: [mem 0x000000009a13ce58-0x000000009a13d017] usable
Oct 29 00:41:07.474129 kernel: reserve setup_data: [mem 0x000000009a13d018-0x000000009a146c57] usable
Oct 29 00:41:07.474136 kernel: reserve setup_data: [mem 0x000000009a146c58-0x000000009b8ecfff] usable
Oct 29 00:41:07.474144 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Oct 29 00:41:07.474151 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Oct 29 00:41:07.474159 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Oct 29 00:41:07.474166 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Oct 29 00:41:07.474174 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Oct 29 00:41:07.474191 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Oct 29 00:41:07.474199 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Oct 29 00:41:07.474210 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Oct 29 00:41:07.474218 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 29 00:41:07.474226 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 29 00:41:07.474236 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 29 00:41:07.474244 kernel: efi: EFI v2.7 by EDK II
Oct 29 00:41:07.474252 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Oct 29 00:41:07.474260 kernel: random: crng init done
Oct 29 00:41:07.474267 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Oct 29 00:41:07.474275 kernel: secureboot: Secure boot enabled
Oct 29 00:41:07.474283 kernel: SMBIOS 2.8 present.
Oct 29 00:41:07.474290 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Oct 29 00:41:07.474298 kernel: DMI: Memory slots populated: 1/1
Oct 29 00:41:07.474308 kernel: Hypervisor detected: KVM
Oct 29 00:41:07.474316 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Oct 29 00:41:07.474323 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 29 00:41:07.474331 kernel: kvm-clock: using sched offset of 5605656161 cycles
Oct 29 00:41:07.474339 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 29 00:41:07.474348 kernel: tsc: Detected 2794.748 MHz processor
Oct 29 00:41:07.474356 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 29 00:41:07.474364 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 29 00:41:07.474372 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Oct 29 00:41:07.474384 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 29 00:41:07.474395 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 29 00:41:07.474405 kernel: Using GB pages for direct mapping
Oct 29 00:41:07.474416 kernel: ACPI: Early table checksum verification disabled
Oct 29 00:41:07.474427 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Oct 29 00:41:07.474438 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 29 00:41:07.474450 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:07.474464 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:07.474475 kernel: ACPI: FACS 0x000000009BBDD000 000040
Oct 29 00:41:07.474485 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:07.474495 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:07.474503 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:07.474511 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:07.474519 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 29 00:41:07.474530 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Oct 29 00:41:07.474538 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Oct 29 00:41:07.474546 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Oct 29 00:41:07.474554 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Oct 29 00:41:07.474562 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Oct 29 00:41:07.474570 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Oct 29 00:41:07.474578 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Oct 29 00:41:07.474586 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Oct 29 00:41:07.474596 kernel: No NUMA configuration found
Oct 29 00:41:07.474605 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Oct 29 00:41:07.474613 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Oct 29 00:41:07.474621 kernel: Zone ranges:
Oct 29 00:41:07.474629 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 29 00:41:07.474637 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Oct 29 00:41:07.474645 kernel: Normal empty
Oct 29 00:41:07.474655 kernel: Device empty
Oct 29 00:41:07.474663 kernel: Movable zone start for each node
Oct 29 00:41:07.474671 kernel: Early memory node ranges
Oct 29 00:41:07.474679 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Oct 29 00:41:07.474687 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Oct 29 00:41:07.474695 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Oct 29 00:41:07.474703 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Oct 29 00:41:07.474711 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Oct 29 00:41:07.474721 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Oct 29 00:41:07.474729 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 29 00:41:07.474737 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Oct 29 00:41:07.474745 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 29 00:41:07.474753 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 29 00:41:07.474761 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Oct 29 00:41:07.474769 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Oct 29 00:41:07.474779 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 29 00:41:07.474788 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 29 00:41:07.474796 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 29 00:41:07.474804 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 29 00:41:07.474812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 29 00:41:07.474820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 29 00:41:07.474828 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 29 00:41:07.474838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 29 00:41:07.474846 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 29 00:41:07.474854 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 29 00:41:07.474862 kernel: TSC deadline timer available
Oct 29 00:41:07.474870 kernel: CPU topo: Max. logical packages: 1
Oct 29 00:41:07.474878 kernel: CPU topo: Max. logical dies: 1
Oct 29 00:41:07.474895 kernel: CPU topo: Max. dies per package: 1
Oct 29 00:41:07.474904 kernel: CPU topo: Max. threads per core: 1
Oct 29 00:41:07.474912 kernel: CPU topo: Num. cores per package: 4
Oct 29 00:41:07.474920 kernel: CPU topo: Num. threads per package: 4
Oct 29 00:41:07.474944 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 29 00:41:07.474953 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 29 00:41:07.474961 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 29 00:41:07.474970 kernel: kvm-guest: setup PV sched yield
Oct 29 00:41:07.474980 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Oct 29 00:41:07.474989 kernel: Booting paravirtualized kernel on KVM
Oct 29 00:41:07.474997 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 29 00:41:07.475006 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 29 00:41:07.475015 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 29 00:41:07.475023 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 29 00:41:07.475031 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 29 00:41:07.475039 kernel: kvm-guest: PV spinlocks enabled
Oct 29 00:41:07.475050 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 29 00:41:07.475060 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae
Oct 29 00:41:07.475068 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 29 00:41:07.475077 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 29 00:41:07.475085 kernel: Fallback order for Node 0: 0
Oct 29 00:41:07.475093 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Oct 29 00:41:07.475104 kernel: Policy zone: DMA32
Oct 29 00:41:07.475112 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 29 00:41:07.475120 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 29 00:41:07.475128 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 29 00:41:07.475137 kernel: ftrace: allocated 157 pages with 5 groups
Oct 29 00:41:07.475145 kernel: Dynamic Preempt: voluntary
Oct 29 00:41:07.475153 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 29 00:41:07.475164 kernel: rcu: RCU event tracing is enabled.
Oct 29 00:41:07.475173 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 29 00:41:07.475191 kernel: Trampoline variant of Tasks RCU enabled.
Oct 29 00:41:07.475200 kernel: Rude variant of Tasks RCU enabled.
Oct 29 00:41:07.475209 kernel: Tracing variant of Tasks RCU enabled.
Oct 29 00:41:07.475217 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 29 00:41:07.475225 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 29 00:41:07.475235 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 00:41:07.475245 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 00:41:07.475254 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 00:41:07.475263 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 29 00:41:07.475271 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 29 00:41:07.475280 kernel: Console: colour dummy device 80x25
Oct 29 00:41:07.475288 kernel: printk: legacy console [ttyS0] enabled
Oct 29 00:41:07.475296 kernel: ACPI: Core revision 20240827
Oct 29 00:41:07.475307 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 29 00:41:07.475316 kernel: APIC: Switch to symmetric I/O mode setup
Oct 29 00:41:07.475324 kernel: x2apic enabled
Oct 29 00:41:07.475332 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 29 00:41:07.475341 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 29 00:41:07.475349 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 29 00:41:07.475357 kernel: kvm-guest: setup PV IPIs
Oct 29 00:41:07.475368 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 29 00:41:07.475376 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 29 00:41:07.475385 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 29 00:41:07.475393 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 29 00:41:07.475402 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 29 00:41:07.475410 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 29 00:41:07.475418 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 29 00:41:07.475429 kernel: Spectre V2 : Mitigation: Retpolines
Oct 29 00:41:07.475437 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 29 00:41:07.475445 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 29 00:41:07.475454 kernel: active return thunk: retbleed_return_thunk
Oct 29 00:41:07.475462 kernel: RETBleed: Mitigation: untrained return thunk
Oct 29 00:41:07.475471 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 29 00:41:07.475479 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 29 00:41:07.475489 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 29 00:41:07.475498 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 29 00:41:07.475507 kernel: active return thunk: srso_return_thunk
Oct 29 00:41:07.475515 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 29 00:41:07.475524 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 29 00:41:07.475532 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 29 00:41:07.475540 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 29 00:41:07.475551 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 29 00:41:07.475559 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 29 00:41:07.475568 kernel: Freeing SMP alternatives memory: 32K
Oct 29 00:41:07.475576 kernel: pid_max: default: 32768 minimum: 301
Oct 29 00:41:07.475584 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 29 00:41:07.475592 kernel: landlock: Up and running.
Oct 29 00:41:07.475601 kernel: SELinux: Initializing.
Oct 29 00:41:07.475611 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 00:41:07.475620 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 00:41:07.475628 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 29 00:41:07.475637 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 29 00:41:07.475645 kernel: ... version: 0
Oct 29 00:41:07.475654 kernel: ... bit width: 48
Oct 29 00:41:07.475662 kernel: ... generic registers: 6
Oct 29 00:41:07.475673 kernel: ... value mask: 0000ffffffffffff
Oct 29 00:41:07.475681 kernel: ... max period: 00007fffffffffff
Oct 29 00:41:07.475689 kernel: ... fixed-purpose events: 0
Oct 29 00:41:07.475698 kernel: ... event mask: 000000000000003f
Oct 29 00:41:07.475706 kernel: signal: max sigframe size: 1776
Oct 29 00:41:07.475714 kernel: rcu: Hierarchical SRCU implementation.
Oct 29 00:41:07.475723 kernel: rcu: Max phase no-delay instances is 400.
Oct 29 00:41:07.475731 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 29 00:41:07.475742 kernel: smp: Bringing up secondary CPUs ...
Oct 29 00:41:07.475750 kernel: smpboot: x86: Booting SMP configuration:
Oct 29 00:41:07.475758 kernel: .... node #0, CPUs: #1 #2 #3
Oct 29 00:41:07.475766 kernel: smp: Brought up 1 node, 4 CPUs
Oct 29 00:41:07.475775 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 29 00:41:07.475784 kernel: Memory: 2431744K/2552216K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114536K reserved, 0K cma-reserved)
Oct 29 00:41:07.475792 kernel: devtmpfs: initialized
Oct 29 00:41:07.475803 kernel: x86/mm: Memory block size: 128MB
Oct 29 00:41:07.475811 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Oct 29 00:41:07.475820 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Oct 29 00:41:07.475828 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 29 00:41:07.475837 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 29 00:41:07.475845 kernel: pinctrl core: initialized pinctrl subsystem
Oct 29 00:41:07.475853 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 29 00:41:07.475864 kernel: audit: initializing netlink subsys (disabled)
Oct 29 00:41:07.475872 kernel: audit: type=2000 audit(1761698464.353:1): state=initialized audit_enabled=0 res=1
Oct 29 00:41:07.475880 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 29 00:41:07.475888 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 29 00:41:07.475897 kernel: cpuidle: using governor menu
Oct 29 00:41:07.475905 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 29 00:41:07.475913 kernel: dca service started, version 1.12.1
Oct 29 00:41:07.475936 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Oct 29 00:41:07.475945 kernel: PCI: Using configuration type 1 for base access
Oct 29 00:41:07.475953 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 29 00:41:07.475962 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 29 00:41:07.475970 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 29 00:41:07.475979 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 29 00:41:07.475987 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 29 00:41:07.475998 kernel: ACPI: Added _OSI(Module Device)
Oct 29 00:41:07.476006 kernel: ACPI: Added _OSI(Processor Device)
Oct 29 00:41:07.476014 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 29 00:41:07.476023 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 29 00:41:07.476031 kernel: ACPI: Interpreter enabled
Oct 29 00:41:07.476039 kernel: ACPI: PM: (supports S0 S5)
Oct 29 00:41:07.476048 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 29 00:41:07.476058 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 29 00:41:07.476066 kernel: PCI: Using E820 reservations for host bridge windows
Oct 29 00:41:07.476075 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 29 00:41:07.476083 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 29 00:41:07.476338 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 29 00:41:07.476518 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 29 00:41:07.476695 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 29 00:41:07.476707 kernel: PCI host bridge to bus 0000:00
Oct 29 00:41:07.476877 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 29 00:41:07.477050 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 29 00:41:07.477217 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 29 00:41:07.477413 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Oct 29 00:41:07.477582 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Oct 29 00:41:07.477737 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Oct 29 00:41:07.477906 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 29 00:41:07.478122 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 29 00:41:07.478330 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 29 00:41:07.478520 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Oct 29 00:41:07.478841 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Oct 29 00:41:07.479030 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Oct 29 00:41:07.479217 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 29 00:41:07.479401 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 29 00:41:07.479573 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Oct 29 00:41:07.479746 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Oct 29 00:41:07.479914 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Oct 29 00:41:07.480113 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 29 00:41:07.480294 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Oct 29 00:41:07.480464 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Oct 29 00:41:07.480633 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Oct 29 00:41:07.480816 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 29 00:41:07.481008 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Oct 29 00:41:07.481189 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Oct 29 00:41:07.481369 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Oct 29 00:41:07.481539 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Oct 29 00:41:07.481721 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 29 00:41:07.481890 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 29 00:41:07.482084 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 29 00:41:07.482267 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Oct 29 00:41:07.482437 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Oct 29 00:41:07.482617 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 29 00:41:07.482792 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Oct 29 00:41:07.482804 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 29 00:41:07.482813 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 29 00:41:07.482822 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 29 00:41:07.482830 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 29 00:41:07.482839 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 29 00:41:07.482850 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 29 00:41:07.482859 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 29 00:41:07.482868 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 29 00:41:07.482876 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 29 00:41:07.482884 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 29 00:41:07.482893 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 29 00:41:07.482901 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 29 00:41:07.482912 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 29 00:41:07.482920 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 29 00:41:07.482942 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 29 00:41:07.482950 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 29 00:41:07.482959 kernel: iommu: Default domain type: Translated
Oct 29 00:41:07.482967 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 29 00:41:07.482976 kernel: efivars: Registered efivars operations
Oct 29 00:41:07.482987 kernel: PCI: Using ACPI for IRQ routing
Oct 29 00:41:07.482995 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 29 00:41:07.483004 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Oct 29 00:41:07.483012 kernel: e820: reserve RAM buffer [mem 0x9a100018-0x9bffffff]
Oct 29 00:41:07.483020 kernel: e820: reserve RAM buffer [mem 0x9a13d018-0x9bffffff]
Oct 29 00:41:07.483028 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Oct 29 00:41:07.483037 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Oct 29 00:41:07.483217 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 29 00:41:07.483394 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 29 00:41:07.483564 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 29 00:41:07.483576 kernel: vgaarb: loaded
Oct 29 00:41:07.483584 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 29 00:41:07.483593 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 29 00:41:07.483601 kernel: clocksource: Switched to clocksource kvm-clock
Oct 29 00:41:07.483613 kernel: VFS: Disk quotas dquot_6.6.0
Oct 29 00:41:07.483622 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 29 00:41:07.483630 kernel: pnp: PnP ACPI init
Oct 29 00:41:07.483815 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Oct 29 00:41:07.483828 kernel: pnp: PnP ACPI: found 6 devices
Oct 29 00:41:07.483836 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 29 00:41:07.483845 kernel: NET: Registered PF_INET protocol family
Oct 29 00:41:07.483857 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 29 00:41:07.483866 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 29 00:41:07.483874 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 29 00:41:07.483883 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 29 00:41:07.483892 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 29 00:41:07.483900 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 29 00:41:07.483909 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 00:41:07.483920 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 00:41:07.483945 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 29 00:41:07.483954 kernel: NET: Registered PF_XDP protocol family
Oct 29 00:41:07.484125 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Oct 29 00:41:07.484341 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Oct 29 00:41:07.484511 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 29 00:41:07.484673 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 29 00:41:07.484829 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 29 00:41:07.485002 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Oct 29 00:41:07.485175 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Oct 29 00:41:07.485353 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Oct 29 00:41:07.485364 kernel: PCI: CLS 0 bytes, default 64
Oct 29 00:41:07.485373 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 29 00:41:07.485385 kernel: Initialise system trusted keyrings
Oct 29 00:41:07.485394 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 29 00:41:07.485402 kernel: Key type asymmetric registered
Oct 29 00:41:07.485411 kernel: Asymmetric key parser 'x509' registered
Oct 29 00:41:07.485444 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 29 00:41:07.485460 kernel: io scheduler mq-deadline registered
Oct 29 00:41:07.485469 kernel: io scheduler kyber registered
Oct 29 00:41:07.485484 kernel: io scheduler bfq registered
Oct 29 00:41:07.485492 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 29 00:41:07.485501 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 29 00:41:07.485510 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 29 00:41:07.485519 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 29 00:41:07.485528 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 29 00:41:07.485537 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 29 00:41:07.485553 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 29 00:41:07.485561 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 29 00:41:07.485570 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 29 00:41:07.485747 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 29 00:41:07.485760 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 29 00:41:07.485920 kernel: rtc_cmos 00:04: registered as rtc0
Oct 29 00:41:07.486116 kernel: rtc_cmos 00:04: setting system clock to 2025-10-29T00:41:05 UTC (1761698465)
Oct 29 00:41:07.486309 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 29 00:41:07.486322 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 29 00:41:07.486331 kernel: efifb: probing for efifb
Oct 29 00:41:07.486340 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Oct 29 00:41:07.486349 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Oct 29 00:41:07.486357 kernel: efifb: scrolling: redraw
Oct 29 00:41:07.486376 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 29 00:41:07.486385 kernel: Console: switching to colour frame buffer device 160x50
Oct 29 00:41:07.486400 kernel: fb0: EFI VGA frame buffer device
Oct 29 00:41:07.486409 kernel: pstore: Using crash dump compression: deflate
Oct 29 00:41:07.486419 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 29 00:41:07.486435 kernel: NET: Registered PF_INET6 protocol family
Oct 29 00:41:07.486443 kernel: Segment Routing with IPv6
Oct 29 00:41:07.486452 kernel: In-situ OAM (IOAM) with IPv6
Oct 29 00:41:07.486461 kernel: NET: Registered PF_PACKET protocol family
Oct 29 00:41:07.486469 kernel: Key type dns_resolver registered
Oct 29 00:41:07.486478 kernel: IPI shorthand broadcast: enabled
Oct 29 00:41:07.486487 kernel: sched_clock: Marking stable (1480002764, 259451527)->(1793391499, -53937208)
Oct 29 00:41:07.486502 kernel: registered taskstats version 1
Oct 29 00:41:07.486511 kernel: Loading compiled-in X.509 certificates
Oct 29 00:41:07.486520 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 4eb70affb0e364bb9bcbea2a9416e57c31aed070'
Oct 29 00:41:07.486529 kernel: Demotion targets for Node 0: null
Oct 29 00:41:07.486538 kernel: Key type .fscrypt registered
Oct 29 00:41:07.486546 kernel: Key type fscrypt-provisioning registered
Oct 29 00:41:07.486555 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 29 00:41:07.486576 kernel: ima: Allocated hash algorithm: sha1 Oct 29 00:41:07.486585 kernel: ima: No architecture policies found Oct 29 00:41:07.486594 kernel: clk: Disabling unused clocks Oct 29 00:41:07.486602 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 29 00:41:07.486611 kernel: Write protecting the kernel read-only data: 40960k Oct 29 00:41:07.486620 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 29 00:41:07.486629 kernel: Run /init as init process Oct 29 00:41:07.486644 kernel: with arguments: Oct 29 00:41:07.486653 kernel: /init Oct 29 00:41:07.486661 kernel: with environment: Oct 29 00:41:07.486670 kernel: HOME=/ Oct 29 00:41:07.486679 kernel: TERM=linux Oct 29 00:41:07.486687 kernel: SCSI subsystem initialized Oct 29 00:41:07.486696 kernel: libata version 3.00 loaded. Oct 29 00:41:07.486878 kernel: ahci 0000:00:1f.2: version 3.0 Oct 29 00:41:07.486890 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 29 00:41:07.487095 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 29 00:41:07.487278 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 29 00:41:07.487448 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 29 00:41:07.487642 kernel: scsi host0: ahci Oct 29 00:41:07.487842 kernel: scsi host1: ahci Oct 29 00:41:07.488060 kernel: scsi host2: ahci Oct 29 00:41:07.488252 kernel: scsi host3: ahci Oct 29 00:41:07.488436 kernel: scsi host4: ahci Oct 29 00:41:07.488615 kernel: scsi host5: ahci Oct 29 00:41:07.488628 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 29 00:41:07.488688 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 29 00:41:07.488697 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Oct 29 00:41:07.488706 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Oct 29 00:41:07.488714 kernel: 
ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 29 00:41:07.488723 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 29 00:41:07.488732 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 29 00:41:07.488749 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 29 00:41:07.488757 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 29 00:41:07.488766 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 29 00:41:07.488775 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 29 00:41:07.488784 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 29 00:41:07.488792 kernel: ata3.00: LPM support broken, forcing max_power Oct 29 00:41:07.488801 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 29 00:41:07.488810 kernel: ata3.00: applying bridge limits Oct 29 00:41:07.488826 kernel: ata3.00: LPM support broken, forcing max_power Oct 29 00:41:07.488835 kernel: ata3.00: configured for UDMA/100 Oct 29 00:41:07.489055 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 29 00:41:07.489250 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 29 00:41:07.489477 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 29 00:41:07.489492 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 29 00:41:07.489514 kernel: GPT:16515071 != 27000831 Oct 29 00:41:07.489523 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 29 00:41:07.489531 kernel: GPT:16515071 != 27000831 Oct 29 00:41:07.489540 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 29 00:41:07.489549 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 00:41:07.489558 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 29 00:41:07.489751 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 29 00:41:07.489774 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 29 00:41:07.489977 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 29 00:41:07.489990 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 29 00:41:07.489999 kernel: device-mapper: uevent: version 1.0.3 Oct 29 00:41:07.490008 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 29 00:41:07.490017 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 29 00:41:07.490037 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 29 00:41:07.490045 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 29 00:41:07.490054 kernel: raid6: avx2x4 gen() 28519 MB/s Oct 29 00:41:07.490063 kernel: raid6: avx2x2 gen() 30479 MB/s Oct 29 00:41:07.490072 kernel: raid6: avx2x1 gen() 25554 MB/s Oct 29 00:41:07.490081 kernel: raid6: using algorithm avx2x2 gen() 30479 MB/s Oct 29 00:41:07.490089 kernel: raid6: .... 
xor() 19508 MB/s, rmw enabled Oct 29 00:41:07.490098 kernel: raid6: using avx2x2 recovery algorithm Oct 29 00:41:07.490117 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 29 00:41:07.490126 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 29 00:41:07.490134 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 29 00:41:07.490143 kernel: xor: automatically using best checksumming function avx Oct 29 00:41:07.490152 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 29 00:41:07.490161 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 29 00:41:07.490170 kernel: BTRFS: device fsid c0171910-1eb4-4fd7-b94c-9d6b11be282f devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (176) Oct 29 00:41:07.490188 kernel: BTRFS info (device dm-0): first mount of filesystem c0171910-1eb4-4fd7-b94c-9d6b11be282f Oct 29 00:41:07.490206 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 29 00:41:07.490216 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 29 00:41:07.490225 kernel: BTRFS info (device dm-0): enabling free space tree Oct 29 00:41:07.490235 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 29 00:41:07.490243 kernel: loop: module loaded Oct 29 00:41:07.490252 kernel: loop0: detected capacity change from 0 to 100120 Oct 29 00:41:07.490261 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 00:41:07.490277 systemd[1]: Successfully made /usr/ read-only. 
Oct 29 00:41:07.490290 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 29 00:41:07.490300 systemd[1]: Detected virtualization kvm.
Oct 29 00:41:07.490309 systemd[1]: Detected architecture x86-64.
Oct 29 00:41:07.490318 systemd[1]: Running in initrd.
Oct 29 00:41:07.490327 systemd[1]: No hostname configured, using default hostname.
Oct 29 00:41:07.490343 systemd[1]: Hostname set to .
Oct 29 00:41:07.490352 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 29 00:41:07.490361 systemd[1]: Queued start job for default target initrd.target.
Oct 29 00:41:07.490371 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 29 00:41:07.490380 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 00:41:07.490390 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 00:41:07.490406 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 29 00:41:07.490416 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 29 00:41:07.490426 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 29 00:41:07.490436 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 29 00:41:07.490445 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 00:41:07.490455 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 29 00:41:07.490476 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 29 00:41:07.490485 systemd[1]: Reached target paths.target - Path Units.
Oct 29 00:41:07.490495 systemd[1]: Reached target slices.target - Slice Units.
Oct 29 00:41:07.490504 systemd[1]: Reached target swap.target - Swaps.
Oct 29 00:41:07.490513 systemd[1]: Reached target timers.target - Timer Units.
Oct 29 00:41:07.490523 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 29 00:41:07.490538 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 29 00:41:07.490548 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 29 00:41:07.490557 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 29 00:41:07.490566 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 00:41:07.490576 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 29 00:41:07.490585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 00:41:07.490595 systemd[1]: Reached target sockets.target - Socket Units.
Oct 29 00:41:07.490610 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 29 00:41:07.490620 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 29 00:41:07.490629 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 29 00:41:07.490638 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 29 00:41:07.490648 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 29 00:41:07.490658 systemd[1]: Starting systemd-fsck-usr.service...
Oct 29 00:41:07.490668 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 29 00:41:07.490683 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 29 00:41:07.490692 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:41:07.490702 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 29 00:41:07.490718 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 00:41:07.490727 systemd[1]: Finished systemd-fsck-usr.service.
Oct 29 00:41:07.490737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 29 00:41:07.490775 systemd-journald[310]: Collecting audit messages is disabled.
Oct 29 00:41:07.490803 systemd-journald[310]: Journal started
Oct 29 00:41:07.490822 systemd-journald[310]: Runtime Journal (/run/log/journal/855d797030df446489b4e9fb9b31ff27) is 5.9M, max 47.9M, 41.9M free.
Oct 29 00:41:07.493475 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 29 00:41:07.610734 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 29 00:41:07.615701 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 29 00:41:07.617636 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 29 00:41:07.622040 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 29 00:41:07.627014 kernel: Bridge firewalling registered
Oct 29 00:41:07.626723 systemd-modules-load[313]: Inserted module 'br_netfilter'
Oct 29 00:41:07.631351 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 29 00:41:07.635139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:41:07.639339 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 29 00:41:07.639463 systemd-tmpfiles[330]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 29 00:41:07.643033 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 29 00:41:07.645051 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 00:41:07.647250 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 00:41:07.668669 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 29 00:41:07.671099 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 29 00:41:07.683516 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 29 00:41:07.686044 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 29 00:41:07.714810 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae
Oct 29 00:41:07.731857 systemd-resolved[347]: Positive Trust Anchors:
Oct 29 00:41:07.731874 systemd-resolved[347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 00:41:07.731878 systemd-resolved[347]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 29 00:41:07.731909 systemd-resolved[347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 29 00:41:07.772267 systemd-resolved[347]: Defaulting to hostname 'linux'.
Oct 29 00:41:07.773538 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 29 00:41:07.775439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 29 00:41:07.840966 kernel: Loading iSCSI transport class v2.0-870.
Oct 29 00:41:07.854954 kernel: iscsi: registered transport (tcp)
Oct 29 00:41:07.879979 kernel: iscsi: registered transport (qla4xxx)
Oct 29 00:41:07.880010 kernel: QLogic iSCSI HBA Driver
Oct 29 00:41:07.908607 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 29 00:41:08.006486 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 00:41:08.010382 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 29 00:41:08.067917 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 29 00:41:08.071403 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 29 00:41:08.073677 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 29 00:41:08.117721 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 29 00:41:08.120212 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 00:41:08.151626 systemd-udevd[592]: Using default interface naming scheme 'v257'.
Oct 29 00:41:08.165433 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 00:41:08.171522 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 29 00:41:08.199465 dracut-pre-trigger[653]: rd.md=0: removing MD RAID activation
Oct 29 00:41:08.214087 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 29 00:41:08.218498 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 29 00:41:08.233185 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 29 00:41:08.238663 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 29 00:41:08.345004 systemd-networkd[718]: lo: Link UP
Oct 29 00:41:08.345013 systemd-networkd[718]: lo: Gained carrier
Oct 29 00:41:08.345616 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 29 00:41:08.348263 systemd[1]: Reached target network.target - Network.
Oct 29 00:41:08.404619 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 00:41:08.410280 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 29 00:41:08.455574 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 29 00:41:08.471967 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 29 00:41:08.503393 kernel: cryptd: max_cpu_qlen set to 1000
Oct 29 00:41:08.527617 kernel: AES CTR mode by8 optimization enabled
Oct 29 00:41:08.534921 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 29 00:41:08.537910 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 29 00:41:08.536376 systemd-networkd[718]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:41:08.536381 systemd-networkd[718]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 29 00:41:08.537157 systemd-networkd[718]: eth0: Link UP
Oct 29 00:41:08.537370 systemd-networkd[718]: eth0: Gained carrier
Oct 29 00:41:08.537379 systemd-networkd[718]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:41:08.553020 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 29 00:41:08.557413 systemd-networkd[718]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 29 00:41:08.563668 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 29 00:41:08.568458 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 00:41:08.569566 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:41:08.570823 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:41:08.592893 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:41:08.602670 disk-uuid[829]: Primary Header is updated.
Oct 29 00:41:08.602670 disk-uuid[829]: Secondary Entries is updated.
Oct 29 00:41:08.602670 disk-uuid[829]: Secondary Header is updated.
Oct 29 00:41:08.650270 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:41:08.679076 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 29 00:41:08.687059 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 29 00:41:08.692645 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 00:41:08.696600 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 29 00:41:08.701031 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 29 00:41:08.729106 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 29 00:41:09.677295 disk-uuid[831]: Warning: The kernel is still using the old partition table.
Oct 29 00:41:09.677295 disk-uuid[831]: The new table will be used at the next reboot or after you
Oct 29 00:41:09.677295 disk-uuid[831]: run partprobe(8) or kpartx(8)
Oct 29 00:41:09.677295 disk-uuid[831]: The operation has completed successfully.
Oct 29 00:41:09.684876 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 29 00:41:09.685081 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 29 00:41:09.689479 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 29 00:41:09.741947 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (858)
Oct 29 00:41:09.742009 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:41:09.745039 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:41:09.748820 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 00:41:09.748849 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 00:41:09.756970 kernel: BTRFS info (device vda6): last unmount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:41:09.757862 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 29 00:41:09.760974 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 29 00:41:10.000507 ignition[877]: Ignition 2.22.0
Oct 29 00:41:10.000520 ignition[877]: Stage: fetch-offline
Oct 29 00:41:10.000578 ignition[877]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:41:10.000590 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:41:10.000698 ignition[877]: parsed url from cmdline: ""
Oct 29 00:41:10.000702 ignition[877]: no config URL provided
Oct 29 00:41:10.000707 ignition[877]: reading system config file "/usr/lib/ignition/user.ign"
Oct 29 00:41:10.000719 ignition[877]: no config at "/usr/lib/ignition/user.ign"
Oct 29 00:41:10.000766 ignition[877]: op(1): [started] loading QEMU firmware config module
Oct 29 00:41:10.000771 ignition[877]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 29 00:41:10.011485 ignition[877]: op(1): [finished] loading QEMU firmware config module
Oct 29 00:41:10.098187 ignition[877]: parsing config with SHA512: b58722b7a64ef0e3be941ca6a10f40d21479e1f6fc17084294b5518270366ed1169653f10aca217f41783c208835650dec606b03b6bfe75764f3e22c76e8db27
Oct 29 00:41:10.102772 unknown[877]: fetched base config from "system"
Oct 29 00:41:10.102785 unknown[877]: fetched user config from "qemu"
Oct 29 00:41:10.103176 ignition[877]: fetch-offline: fetch-offline passed
Oct 29 00:41:10.103262 ignition[877]: Ignition finished successfully
Oct 29 00:41:10.111519 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 29 00:41:10.115413 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 29 00:41:10.118616 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 29 00:41:10.214055 ignition[888]: Ignition 2.22.0
Oct 29 00:41:10.214068 ignition[888]: Stage: kargs
Oct 29 00:41:10.214239 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:41:10.214248 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:41:10.214971 ignition[888]: kargs: kargs passed
Oct 29 00:41:10.215023 ignition[888]: Ignition finished successfully
Oct 29 00:41:10.225658 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 29 00:41:10.227577 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 29 00:41:10.264525 ignition[896]: Ignition 2.22.0
Oct 29 00:41:10.264540 ignition[896]: Stage: disks
Oct 29 00:41:10.264738 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:41:10.264748 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:41:10.266170 ignition[896]: disks: disks passed
Oct 29 00:41:10.266220 ignition[896]: Ignition finished successfully
Oct 29 00:41:10.275253 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 29 00:41:10.278640 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 29 00:41:10.279318 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 29 00:41:10.282821 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 29 00:41:10.286650 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 29 00:41:10.289833 systemd[1]: Reached target basic.target - Basic System.
Oct 29 00:41:10.291459 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 29 00:41:10.333415 systemd-fsck[906]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 29 00:41:10.341355 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 29 00:41:10.343475 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 29 00:41:10.469975 kernel: EXT4-fs (vda9): mounted filesystem ef53721c-fae5-4ad9-8976-8181c84bc175 r/w with ordered data mode. Quota mode: none.
Oct 29 00:41:10.470793 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 29 00:41:10.472154 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 29 00:41:10.477206 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 29 00:41:10.478697 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 29 00:41:10.480695 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 29 00:41:10.480729 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 29 00:41:10.480750 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 29 00:41:10.501409 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 29 00:41:10.503779 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 29 00:41:10.512868 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (915)
Oct 29 00:41:10.512891 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:41:10.512908 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:41:10.516492 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 00:41:10.516529 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 00:41:10.517964 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 29 00:41:10.518108 systemd-networkd[718]: eth0: Gained IPv6LL
Oct 29 00:41:10.566583 initrd-setup-root[939]: cut: /sysroot/etc/passwd: No such file or directory
Oct 29 00:41:10.572588 initrd-setup-root[946]: cut: /sysroot/etc/group: No such file or directory
Oct 29 00:41:10.578587 initrd-setup-root[953]: cut: /sysroot/etc/shadow: No such file or directory
Oct 29 00:41:10.584334 initrd-setup-root[960]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 29 00:41:10.681613 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 29 00:41:10.685208 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 29 00:41:10.687616 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 29 00:41:10.707956 kernel: BTRFS info (device vda6): last unmount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:41:10.719469 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 29 00:41:10.724158 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 29 00:41:10.749829 ignition[1029]: INFO : Ignition 2.22.0
Oct 29 00:41:10.749829 ignition[1029]: INFO : Stage: mount
Oct 29 00:41:10.752460 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 00:41:10.752460 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:41:10.752460 ignition[1029]: INFO : mount: mount passed
Oct 29 00:41:10.752460 ignition[1029]: INFO : Ignition finished successfully
Oct 29 00:41:10.754241 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 29 00:41:10.758828 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 29 00:41:10.785920 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 29 00:41:10.817920 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1041)
Oct 29 00:41:10.818059 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:41:10.818073 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:41:10.823222 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 00:41:10.823294 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 00:41:10.824946 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 29 00:41:10.866736 ignition[1058]: INFO : Ignition 2.22.0
Oct 29 00:41:10.866736 ignition[1058]: INFO : Stage: files
Oct 29 00:41:10.869563 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 00:41:10.869563 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:41:10.869563 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping
Oct 29 00:41:10.875247 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 29 00:41:10.875247 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 29 00:41:10.880094 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 29 00:41:10.880094 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 29 00:41:10.880094 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 29 00:41:10.880094 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 29 00:41:10.880094 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 29 00:41:10.876684 unknown[1058]: wrote ssh authorized keys file for user: core
Oct 29 00:41:10.920942 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 29 00:41:11.002677 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 29 00:41:11.006181 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 29 00:41:11.006181 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 29 00:41:11.006181 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 00:41:11.006181 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 00:41:11.006181 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 00:41:11.006181 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 00:41:11.006181 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 00:41:11.006181 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 00:41:11.029291 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 00:41:11.029291 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 00:41:11.029291 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 29 00:41:11.029291 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 29 00:41:11.029291 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 29 00:41:11.029291 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Oct 29 00:41:11.347824 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 29 00:41:11.836549 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 29 00:41:11.836549 ignition[1058]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 29 00:41:11.842803 ignition[1058]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 00:41:11.842803 ignition[1058]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 00:41:11.842803 ignition[1058]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 29 00:41:11.842803 ignition[1058]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 29 00:41:11.842803 ignition[1058]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 29 00:41:11.842803 ignition[1058]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 29 00:41:11.842803
ignition[1058]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 29 00:41:11.842803 ignition[1058]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 29 00:41:11.865796 ignition[1058]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 00:41:11.872534 ignition[1058]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 00:41:11.875227 ignition[1058]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 29 00:41:11.875227 ignition[1058]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 29 00:41:11.875227 ignition[1058]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 00:41:11.875227 ignition[1058]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 00:41:11.875227 ignition[1058]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 00:41:11.875227 ignition[1058]: INFO : files: files passed Oct 29 00:41:11.875227 ignition[1058]: INFO : Ignition finished successfully Oct 29 00:41:11.890867 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 29 00:41:11.895569 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 29 00:41:11.899875 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 29 00:41:11.911176 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 29 00:41:11.911315 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Oct 29 00:41:11.918105 initrd-setup-root-after-ignition[1087]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 29 00:41:11.921871 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 00:41:11.924718 initrd-setup-root-after-ignition[1089]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 00:41:11.927556 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 00:41:11.931952 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 00:41:11.932818 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 29 00:41:11.939479 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 29 00:41:11.996675 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 29 00:41:11.996818 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 29 00:41:11.997885 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 29 00:41:12.002791 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 29 00:41:12.006729 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 29 00:41:12.008735 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 29 00:41:12.050005 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 29 00:41:12.052181 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 29 00:41:12.076333 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 29 00:41:12.076560 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 29 00:41:12.080126 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 00:41:12.081020 systemd[1]: Stopped target timers.target - Timer Units.
Oct 29 00:41:12.086062 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 29 00:41:12.086185 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 29 00:41:12.091635 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 29 00:41:12.095172 systemd[1]: Stopped target basic.target - Basic System.
Oct 29 00:41:12.096010 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 29 00:41:12.096538 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 29 00:41:12.106069 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 29 00:41:12.106778 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 29 00:41:12.110473 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 29 00:41:12.113627 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 29 00:41:12.116792 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 29 00:41:12.120688 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 29 00:41:12.123757 systemd[1]: Stopped target swap.target - Swaps.
Oct 29 00:41:12.126762 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 29 00:41:12.126889 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 29 00:41:12.131792 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 29 00:41:12.132664 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 00:41:12.137316 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 29 00:41:12.140364 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 00:41:12.143874 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 29 00:41:12.144022 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 29 00:41:12.149174 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 29 00:41:12.149292 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 29 00:41:12.150353 systemd[1]: Stopped target paths.target - Path Units.
Oct 29 00:41:12.154599 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 29 00:41:12.157996 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 00:41:12.159452 systemd[1]: Stopped target slices.target - Slice Units.
Oct 29 00:41:12.162771 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 29 00:41:12.166556 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 29 00:41:12.166643 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 29 00:41:12.169631 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 29 00:41:12.169707 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 29 00:41:12.172489 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 29 00:41:12.172598 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 00:41:12.175053 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 29 00:41:12.175156 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 29 00:41:12.179473 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 29 00:41:12.180963 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 29 00:41:12.181086 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 00:41:12.184589 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 29 00:41:12.187346 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 29 00:41:12.187464 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 00:41:12.191394 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 29 00:41:12.191500 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 00:41:12.195469 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 29 00:41:12.195587 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 29 00:41:12.207239 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 29 00:41:12.208347 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 29 00:41:12.227623 ignition[1113]: INFO : Ignition 2.22.0
Oct 29 00:41:12.227623 ignition[1113]: INFO : Stage: umount
Oct 29 00:41:12.230223 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 00:41:12.230223 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:41:12.230223 ignition[1113]: INFO : umount: umount passed
Oct 29 00:41:12.230223 ignition[1113]: INFO : Ignition finished successfully
Oct 29 00:41:12.232603 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 29 00:41:12.232736 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 29 00:41:12.235225 systemd[1]: Stopped target network.target - Network.
Oct 29 00:41:12.238501 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 29 00:41:12.238568 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 29 00:41:12.239346 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 29 00:41:12.239397 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 29 00:41:12.244440 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 29 00:41:12.244497 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 29 00:41:12.247378 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 29 00:41:12.247431 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 29 00:41:12.252775 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 29 00:41:12.253742 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 29 00:41:12.261454 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 29 00:41:12.264278 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 29 00:41:12.264483 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 29 00:41:12.272061 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 29 00:41:12.272192 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 29 00:41:12.278627 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 29 00:41:12.279588 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 29 00:41:12.279644 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 00:41:12.288592 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 29 00:41:12.289287 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 29 00:41:12.289370 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 29 00:41:12.289897 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 29 00:41:12.289960 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 29 00:41:12.296552 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 29 00:41:12.296618 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 29 00:41:12.297107 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 00:41:12.311699 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 29 00:41:12.311832 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 29 00:41:12.314531 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 29 00:41:12.314671 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 29 00:41:12.329103 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 29 00:41:12.336361 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 00:41:12.338847 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 29 00:41:12.338894 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 29 00:41:12.342414 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 29 00:41:12.342454 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 00:41:12.345600 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 29 00:41:12.345657 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 29 00:41:12.350703 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 29 00:41:12.350759 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 29 00:41:12.355771 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 29 00:41:12.355831 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 29 00:41:12.364471 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 29 00:41:12.368327 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 29 00:41:12.368397 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 00:41:12.369285 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 29 00:41:12.369334 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 00:41:12.369877 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 00:41:12.369939 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:41:12.380053 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 29 00:41:12.380170 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 29 00:41:12.414059 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 29 00:41:12.414200 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 29 00:41:12.415612 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 29 00:41:12.423388 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 29 00:41:12.443130 systemd[1]: Switching root.
Oct 29 00:41:12.483144 systemd-journald[310]: Journal stopped
Oct 29 00:41:14.241249 systemd-journald[310]: Received SIGTERM from PID 1 (systemd).
Oct 29 00:41:14.241317 kernel: SELinux: policy capability network_peer_controls=1
Oct 29 00:41:14.241337 kernel: SELinux: policy capability open_perms=1
Oct 29 00:41:14.241349 kernel: SELinux: policy capability extended_socket_class=1
Oct 29 00:41:14.241361 kernel: SELinux: policy capability always_check_network=0
Oct 29 00:41:14.241385 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 29 00:41:14.241398 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 29 00:41:14.241410 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 29 00:41:14.241422 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 29 00:41:14.241439 kernel: SELinux: policy capability userspace_initial_context=0
Oct 29 00:41:14.241452 kernel: audit: type=1403 audit(1761698473.233:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 29 00:41:14.241473 systemd[1]: Successfully loaded SELinux policy in 153.836ms.
Oct 29 00:41:14.241504 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.478ms.
Oct 29 00:41:14.241518 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 29 00:41:14.241531 systemd[1]: Detected virtualization kvm.
Oct 29 00:41:14.241544 systemd[1]: Detected architecture x86-64.
Oct 29 00:41:14.241557 systemd[1]: Detected first boot.
Oct 29 00:41:14.241570 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 29 00:41:14.241583 kernel: Guest personality initialized and is inactive
Oct 29 00:41:14.241602 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 29 00:41:14.241614 kernel: Initialized host personality
Oct 29 00:41:14.241626 zram_generator::config[1160]: No configuration found.
Oct 29 00:41:14.241640 kernel: NET: Registered PF_VSOCK protocol family
Oct 29 00:41:14.241652 systemd[1]: Populated /etc with preset unit settings.
Oct 29 00:41:14.241665 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 29 00:41:14.241678 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 29 00:41:14.241698 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 29 00:41:14.241712 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 29 00:41:14.241729 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 29 00:41:14.241753 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 29 00:41:14.241767 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 29 00:41:14.241780 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 29 00:41:14.241795 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 29 00:41:14.241820 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 29 00:41:14.241833 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 29 00:41:14.241846 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 00:41:14.241859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 00:41:14.241874 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 29 00:41:14.241893 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 29 00:41:14.241908 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 29 00:41:14.241953 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 29 00:41:14.241967 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 29 00:41:14.241980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 00:41:14.242001 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 29 00:41:14.242014 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 29 00:41:14.242026 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 29 00:41:14.242047 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 29 00:41:14.242061 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 29 00:41:14.242074 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 00:41:14.242089 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 29 00:41:14.242102 systemd[1]: Reached target slices.target - Slice Units.
Oct 29 00:41:14.242115 systemd[1]: Reached target swap.target - Swaps.
Oct 29 00:41:14.242127 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 29 00:41:14.242147 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 29 00:41:14.242160 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 29 00:41:14.242173 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 00:41:14.242186 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 29 00:41:14.242201 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 00:41:14.242213 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 29 00:41:14.242226 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 29 00:41:14.242246 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 29 00:41:14.242259 systemd[1]: Mounting media.mount - External Media Directory...
Oct 29 00:41:14.242272 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:14.242285 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 29 00:41:14.242298 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 29 00:41:14.242313 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 29 00:41:14.242327 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 29 00:41:14.242347 systemd[1]: Reached target machines.target - Containers.
Oct 29 00:41:14.242360 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 29 00:41:14.242372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 00:41:14.242386 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 29 00:41:14.242398 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 29 00:41:14.242414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 00:41:14.242427 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 29 00:41:14.242447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 00:41:14.242460 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 29 00:41:14.242473 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 00:41:14.242485 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 29 00:41:14.242498 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 29 00:41:14.242512 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 29 00:41:14.242525 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 29 00:41:14.242544 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 29 00:41:14.242558 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 00:41:14.242571 kernel: fuse: init (API version 7.41)
Oct 29 00:41:14.242584 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 29 00:41:14.242597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 29 00:41:14.242610 kernel: ACPI: bus type drm_connector registered
Oct 29 00:41:14.242623 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 29 00:41:14.242644 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 29 00:41:14.242657 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 29 00:41:14.242670 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 29 00:41:14.242690 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:14.242704 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 29 00:41:14.242735 systemd-journald[1244]: Collecting audit messages is disabled.
Oct 29 00:41:14.242770 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 29 00:41:14.242785 systemd-journald[1244]: Journal started
Oct 29 00:41:14.242816 systemd-journald[1244]: Runtime Journal (/run/log/journal/855d797030df446489b4e9fb9b31ff27) is 5.9M, max 47.9M, 41.9M free.
Oct 29 00:41:14.247589 systemd[1]: Mounted media.mount - External Media Directory.
Oct 29 00:41:13.914264 systemd[1]: Queued start job for default target multi-user.target.
Oct 29 00:41:13.929017 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 29 00:41:13.929547 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 29 00:41:14.249972 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 29 00:41:14.252967 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 29 00:41:14.256000 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 29 00:41:14.257995 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 29 00:41:14.260057 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 29 00:41:14.262364 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 00:41:14.264679 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 29 00:41:14.264913 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 29 00:41:14.267183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 00:41:14.267415 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 00:41:14.269572 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 29 00:41:14.269786 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 29 00:41:14.271810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 00:41:14.272070 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 00:41:14.274347 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 29 00:41:14.274559 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 29 00:41:14.276691 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 00:41:14.277011 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 00:41:14.279589 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 29 00:41:14.281913 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 00:41:14.285096 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 29 00:41:14.287618 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 29 00:41:14.304638 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 29 00:41:14.307196 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 29 00:41:14.310530 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 29 00:41:14.313401 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 29 00:41:14.315340 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 29 00:41:14.315374 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 29 00:41:14.318016 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 29 00:41:14.320434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 00:41:14.323194 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 29 00:41:14.325965 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 29 00:41:14.327914 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 00:41:14.330061 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 29 00:41:14.331903 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 00:41:14.333492 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 29 00:41:14.342076 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 29 00:41:14.346150 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 29 00:41:14.348212 systemd-journald[1244]: Time spent on flushing to /var/log/journal/855d797030df446489b4e9fb9b31ff27 is 16.204ms for 1029 entries.
Oct 29 00:41:14.348212 systemd-journald[1244]: System Journal (/var/log/journal/855d797030df446489b4e9fb9b31ff27) is 8M, max 163.5M, 155.5M free.
Oct 29 00:41:14.454259 systemd-journald[1244]: Received client request to flush runtime journal.
Oct 29 00:41:14.454294 kernel: loop1: detected capacity change from 0 to 128048
Oct 29 00:41:14.454309 kernel: loop2: detected capacity change from 0 to 229808
Oct 29 00:41:14.349260 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 00:41:14.352987 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 29 00:41:14.355753 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 29 00:41:14.409401 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 29 00:41:14.439501 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 29 00:41:14.442393 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 29 00:41:14.444635 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 29 00:41:14.450135 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 29 00:41:14.455107 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 29 00:41:14.458055 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 29 00:41:14.463822 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 29 00:41:14.476328 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 29 00:41:14.516017 kernel: loop3: detected capacity change from 0 to 110976
Oct 29 00:41:14.530993 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Oct 29 00:41:14.531012 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Oct 29 00:41:14.537741 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 00:41:14.540157 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 29 00:41:14.550955 kernel: loop4: detected capacity change from 0 to 128048
Oct 29 00:41:14.564223 kernel: loop5: detected capacity change from 0 to 229808
Oct 29 00:41:14.564901 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 29 00:41:14.575958 kernel: loop6: detected capacity change from 0 to 110976
Oct 29 00:41:14.583520 (sd-merge)[1306]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 29 00:41:14.589261 (sd-merge)[1306]: Merged extensions into '/usr'.
Oct 29 00:41:14.594830 systemd[1]: Reload requested from client PID 1279 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 29 00:41:14.594982 systemd[1]: Reloading...
Oct 29 00:41:14.644864 systemd-resolved[1292]: Positive Trust Anchors:
Oct 29 00:41:14.644886 systemd-resolved[1292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 00:41:14.644891 systemd-resolved[1292]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 29 00:41:14.644922 systemd-resolved[1292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 29 00:41:14.652447 systemd-resolved[1292]: Defaulting to hostname 'linux'.
Oct 29 00:41:14.669952 zram_generator::config[1340]: No configuration found.
Oct 29 00:41:14.964378 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 29 00:41:14.964517 systemd[1]: Reloading finished in 369 ms.
Oct 29 00:41:15.063140 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 29 00:41:15.065378 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 29 00:41:15.069845 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 29 00:41:15.085486 systemd[1]: Starting ensure-sysext.service...
Oct 29 00:41:15.087884 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 29 00:41:15.121171 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 29 00:41:15.121209 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 29 00:41:15.121666 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 29 00:41:15.122006 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 29 00:41:15.122981 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 29 00:41:15.123264 systemd-tmpfiles[1374]: ACLs are not supported, ignoring.
Oct 29 00:41:15.123341 systemd-tmpfiles[1374]: ACLs are not supported, ignoring.
Oct 29 00:41:15.128817 systemd[1]: Reload requested from client PID 1373 ('systemctl') (unit ensure-sysext.service)...
Oct 29 00:41:15.128838 systemd[1]: Reloading...
Oct 29 00:41:15.130901 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot.
Oct 29 00:41:15.130914 systemd-tmpfiles[1374]: Skipping /boot
Oct 29 00:41:15.142378 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot.
Oct 29 00:41:15.142389 systemd-tmpfiles[1374]: Skipping /boot
Oct 29 00:41:15.184982 zram_generator::config[1404]: No configuration found.
Oct 29 00:41:15.395485 systemd[1]: Reloading finished in 265 ms.
Oct 29 00:41:15.422025 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 29 00:41:15.453051 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 00:41:15.463824 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 29 00:41:15.466776 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 29 00:41:15.469971 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 29 00:41:15.478987 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 29 00:41:15.484790 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 00:41:15.490308 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 29 00:41:15.498893 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:15.499162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 00:41:15.512028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 00:41:15.516190 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 00:41:15.527112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 00:41:15.530131 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 00:41:15.530315 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 00:41:15.530480 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:15.532896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 00:41:15.533191 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 00:41:15.535834 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 00:41:15.536077 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 00:41:15.538699 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 00:41:15.538911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 00:41:15.548964 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 00:41:15.549276 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 00:41:15.552336 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 29 00:41:15.559158 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:15.560121 augenrules[1474]: No rules
Oct 29 00:41:15.560146 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 00:41:15.561869 systemd-udevd[1448]: Using default interface naming scheme 'v257'.
Oct 29 00:41:15.562983 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 00:41:15.568129 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 00:41:15.581144 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 00:41:15.582910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 00:41:15.583070 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 00:41:15.583171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:15.584445 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 29 00:41:15.584699 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 29 00:41:15.587496 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 29 00:41:15.590599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 00:41:15.590804 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 00:41:15.593342 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 00:41:15.593549 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 00:41:15.596312 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 00:41:15.596540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 00:41:15.607990 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:15.612349 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 29 00:41:15.636253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 00:41:15.647246 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 00:41:15.650479 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 29 00:41:15.654222 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 00:41:15.663096 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 00:41:15.665238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 00:41:15.665350 augenrules[1488]: /sbin/augenrules: No change
Oct 29 00:41:15.665363 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 00:41:15.665490 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:15.666501 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 00:41:15.674602 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 29 00:41:15.676366 augenrules[1518]: No rules
Oct 29 00:41:15.677576 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 00:41:15.677801 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 00:41:15.680369 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 29 00:41:15.680618 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 29 00:41:15.683080 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 29 00:41:15.683302 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 29 00:41:15.687013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 00:41:15.687234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 00:41:15.690334 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 00:41:15.690909 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 00:41:15.706071 systemd[1]: Finished ensure-sysext.service.
Oct 29 00:41:15.717378 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 29 00:41:15.719370 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 00:41:15.719460 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 00:41:15.723097 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 29 00:41:15.725099 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 29 00:41:15.842915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 29 00:41:15.847026 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 29 00:41:15.857516 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 29 00:41:15.956919 systemd-networkd[1540]: lo: Link UP
Oct 29 00:41:15.956955 systemd-networkd[1540]: lo: Gained carrier
Oct 29 00:41:15.959072 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 29 00:41:15.961442 systemd[1]: Reached target network.target - Network.
Oct 29 00:41:15.965097 systemd-networkd[1540]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:41:15.965105 systemd-networkd[1540]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 29 00:41:15.965807 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 29 00:41:15.967547 systemd-networkd[1540]: eth0: Link UP
Oct 29 00:41:15.970161 systemd-networkd[1540]: eth0: Gained carrier
Oct 29 00:41:15.970188 systemd-networkd[1540]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:41:15.971365 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 29 00:41:15.977981 kernel: mousedev: PS/2 mouse device common for all mice
Oct 29 00:41:15.979548 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 29 00:41:15.980091 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 29 00:41:15.981765 systemd[1]: Reached target time-set.target - System Time Set.
Oct 29 00:41:15.984001 systemd-networkd[1540]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 29 00:41:15.985869 systemd-timesyncd[1541]: Network configuration changed, trying to establish connection.
Oct 29 00:41:16.470195 systemd-timesyncd[1541]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 29 00:41:16.470323 systemd-timesyncd[1541]: Initial clock synchronization to Wed 2025-10-29 00:41:16.470032 UTC.
Oct 29 00:41:16.471525 systemd-resolved[1292]: Clock change detected. Flushing caches.
Oct 29 00:41:16.473035 kernel: ACPI: button: Power Button [PWRF]
Oct 29 00:41:16.475030 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 29 00:41:16.485971 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 29 00:41:16.509949 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 29 00:41:16.510341 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 29 00:41:16.513243 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 29 00:41:16.738156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:41:16.751241 kernel: kvm_amd: TSC scaling supported
Oct 29 00:41:16.751299 kernel: kvm_amd: Nested Virtualization enabled
Oct 29 00:41:16.751313 kernel: kvm_amd: Nested Paging enabled
Oct 29 00:41:16.751342 kernel: kvm_amd: LBR virtualization supported
Oct 29 00:41:16.754328 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 29 00:41:16.754413 kernel: kvm_amd: Virtual GIF supported
Oct 29 00:41:16.764925 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 00:41:16.765605 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:41:16.771257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:41:16.843037 kernel: EDAC MC: Ver: 3.0.0
Oct 29 00:41:16.939043 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:41:16.974705 ldconfig[1445]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 29 00:41:16.983211 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 29 00:41:16.986763 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 29 00:41:17.031341 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 29 00:41:17.034295 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 29 00:41:17.036623 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 29 00:41:17.038757 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 29 00:41:17.040944 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 29 00:41:17.043081 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 29 00:41:17.045181 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 29 00:41:17.047264 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 29 00:41:17.049380 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 29 00:41:17.049429 systemd[1]: Reached target paths.target - Path Units.
Oct 29 00:41:17.051295 systemd[1]: Reached target timers.target - Timer Units.
Oct 29 00:41:17.054411 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 29 00:41:17.059122 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 29 00:41:17.063620 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 29 00:41:17.066109 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 29 00:41:17.068338 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 29 00:41:17.075371 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 29 00:41:17.077808 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 29 00:41:17.081695 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 29 00:41:17.085273 systemd[1]: Reached target sockets.target - Socket Units.
Oct 29 00:41:17.087035 systemd[1]: Reached target basic.target - Basic System.
Oct 29 00:41:17.088766 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 29 00:41:17.088824 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 29 00:41:17.090249 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 29 00:41:17.094044 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 29 00:41:17.119479 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 29 00:41:17.122582 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 29 00:41:17.125402 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 29 00:41:17.127148 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 29 00:41:17.132653 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 29 00:41:17.135331 jq[1595]: false
Oct 29 00:41:17.135938 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 29 00:41:17.141084 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 29 00:41:17.141963 oslogin_cache_refresh[1597]: Refreshing passwd entry cache
Oct 29 00:41:17.144072 google_oslogin_nss_cache[1597]: oslogin_cache_refresh[1597]: Refreshing passwd entry cache
Oct 29 00:41:17.144272 extend-filesystems[1596]: Found /dev/vda6
Oct 29 00:41:17.145765 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 29 00:41:17.149614 extend-filesystems[1596]: Found /dev/vda9
Oct 29 00:41:17.149614 extend-filesystems[1596]: Checking size of /dev/vda9
Oct 29 00:41:17.151345 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 29 00:41:17.153073 oslogin_cache_refresh[1597]: Failure getting users, quitting
Oct 29 00:41:17.158174 google_oslogin_nss_cache[1597]: oslogin_cache_refresh[1597]: Failure getting users, quitting
Oct 29 00:41:17.158174 google_oslogin_nss_cache[1597]: oslogin_cache_refresh[1597]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 29 00:41:17.158174 google_oslogin_nss_cache[1597]: oslogin_cache_refresh[1597]: Refreshing group entry cache
Oct 29 00:41:17.153093 oslogin_cache_refresh[1597]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 29 00:41:17.153143 oslogin_cache_refresh[1597]: Refreshing group entry cache
Oct 29 00:41:17.159174 google_oslogin_nss_cache[1597]: oslogin_cache_refresh[1597]: Failure getting groups, quitting
Oct 29 00:41:17.159174 google_oslogin_nss_cache[1597]: oslogin_cache_refresh[1597]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 29 00:41:17.159110 oslogin_cache_refresh[1597]: Failure getting groups, quitting
Oct 29 00:41:17.159120 oslogin_cache_refresh[1597]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 29 00:41:17.159944 extend-filesystems[1596]: Resized partition /dev/vda9
Oct 29 00:41:17.162235 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 29 00:41:17.163907 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 29 00:41:17.164530 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 29 00:41:17.166201 systemd[1]: Starting update-engine.service - Update Engine...
Oct 29 00:41:17.168697 extend-filesystems[1615]: resize2fs 1.47.3 (8-Jul-2025)
Oct 29 00:41:17.170843 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 29 00:41:17.180099 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Oct 29 00:41:17.179331 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 29 00:41:17.182417 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 29 00:41:17.183367 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 29 00:41:17.183732 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 29 00:41:17.184321 jq[1619]: true
Oct 29 00:41:17.184848 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 29 00:41:17.187884 systemd[1]: motdgen.service: Deactivated successfully.
Oct 29 00:41:17.188758 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 29 00:41:17.194443 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 29 00:41:17.194689 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 29 00:41:17.214926 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Oct 29 00:41:17.229139 jq[1631]: true
Oct 29 00:41:17.233406 update_engine[1618]: I20251029 00:41:17.231391 1618 main.cc:92] Flatcar Update Engine starting
Oct 29 00:41:17.244056 extend-filesystems[1615]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 29 00:41:17.244056 extend-filesystems[1615]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 29 00:41:17.244056 extend-filesystems[1615]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Oct 29 00:41:17.240342 (ntainerd)[1632]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 29 00:41:17.261632 tar[1629]: linux-amd64/LICENSE
Oct 29 00:41:17.261632 tar[1629]: linux-amd64/helm
Oct 29 00:41:17.261838 extend-filesystems[1596]: Resized filesystem in /dev/vda9
Oct 29 00:41:17.247201 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 29 00:41:17.247488 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 29 00:41:17.307220 dbus-daemon[1593]: [system] SELinux support is enabled
Oct 29 00:41:17.307473 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 29 00:41:17.310576 bash[1662]: Updated "/home/core/.ssh/authorized_keys"
Oct 29 00:41:17.312382 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 29 00:41:17.315424 update_engine[1618]: I20251029 00:41:17.312920 1618 update_check_scheduler.cc:74] Next update check in 7m56s
Oct 29 00:41:17.316346 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 29 00:41:17.316438 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 29 00:41:17.316469 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 29 00:41:17.316517 systemd-logind[1616]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 29 00:41:17.316539 systemd-logind[1616]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 29 00:41:17.317744 systemd-logind[1616]: New seat seat0.
Oct 29 00:41:17.390623 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 29 00:41:17.390652 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 29 00:41:17.392912 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 29 00:41:17.394914 systemd[1]: Started update-engine.service - Update Engine.
Oct 29 00:41:17.399678 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 29 00:41:17.630400 sshd_keygen[1623]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 29 00:41:17.649510 locksmithd[1664]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 29 00:41:17.670728 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 29 00:41:17.677191 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 29 00:41:17.699423 systemd[1]: issuegen.service: Deactivated successfully.
Oct 29 00:41:17.699746 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 29 00:41:17.704686 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 29 00:41:17.759231 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 29 00:41:17.764238 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 29 00:41:17.768140 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 29 00:41:17.772223 systemd[1]: Reached target getty.target - Login Prompts.
Oct 29 00:41:17.815839 containerd[1632]: time="2025-10-29T00:41:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 29 00:41:17.816625 containerd[1632]: time="2025-10-29T00:41:17.816588887Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 29 00:41:17.879002 containerd[1632]: time="2025-10-29T00:41:17.878895624Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.104µs"
Oct 29 00:41:17.879002 containerd[1632]: time="2025-10-29T00:41:17.878964794Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 29 00:41:17.879002 containerd[1632]: time="2025-10-29T00:41:17.879012674Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 29 00:41:17.879289 containerd[1632]: time="2025-10-29T00:41:17.879261982Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 29 00:41:17.879289 containerd[1632]: time="2025-10-29T00:41:17.879283021Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 29 00:41:17.879365 containerd[1632]: time="2025-10-29T00:41:17.879320371Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 29 00:41:17.879428 containerd[1632]: time="2025-10-29T00:41:17.879400962Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 29 00:41:17.879428 containerd[1632]: time="2025-10-29T00:41:17.879416491Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 29 00:41:17.879727 containerd[1632]: time="2025-10-29T00:41:17.879698811Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 29 00:41:17.879727 containerd[1632]: time="2025-10-29T00:41:17.879716604Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 29 00:41:17.879727 containerd[1632]: time="2025-10-29T00:41:17.879726783Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 29 00:41:17.879792 containerd[1632]: time="2025-10-29T00:41:17.879734878Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 29 00:41:17.879907 containerd[1632]: time="2025-10-29T00:41:17.879881664Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 29 00:41:17.880205 containerd[1632]: time="2025-10-29T00:41:17.880172890Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 29 00:41:17.880231 containerd[1632]: time="2025-10-29T00:41:17.880212995Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 29 00:41:17.880231 containerd[1632]: time="2025-10-29T00:41:17.880223805Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 29 00:41:17.880297 containerd[1632]: time="2025-10-29T00:41:17.880278307Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 29 00:41:17.880722 containerd[1632]: time="2025-10-29T00:41:17.880649253Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 29 00:41:17.880760 containerd[1632]: time="2025-10-29T00:41:17.880727810Z" level=info msg="metadata content store policy set" policy=shared
Oct 29 00:41:17.889425 containerd[1632]: time="2025-10-29T00:41:17.889373665Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 29 00:41:17.889425 containerd[1632]: time="2025-10-29T00:41:17.889438346Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 29 00:41:17.889576 containerd[1632]: time="2025-10-29T00:41:17.889457121Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 29 00:41:17.889576 containerd[1632]: time="2025-10-29T00:41:17.889469985Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 29 00:41:17.889576 containerd[1632]: time="2025-10-29T00:41:17.889481697Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 29 00:41:17.889576 containerd[1632]: time="2025-10-29T00:41:17.889493820Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 29 00:41:17.889576 containerd[1632]: time="2025-10-29T00:41:17.889508868Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 29 00:41:17.889576 containerd[1632]: time="2025-10-29T00:41:17.889524768Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 29 00:41:17.889576 containerd[1632]: time="2025-10-29T00:41:17.889537712Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 29 00:41:17.889576 containerd[1632]: time="2025-10-29T00:41:17.889548332Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 29 00:41:17.889576 containerd[1632]: time="2025-10-29T00:41:17.889561226Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 29 00:41:17.889576 containerd[1632]: time="2025-10-29T00:41:17.889573579Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 29 00:41:17.889768 containerd[1632]: time="2025-10-29T00:41:17.889716077Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 29 00:41:17.889768 containerd[1632]: time="2025-10-29T00:41:17.889739931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 29 00:41:17.889768 containerd[1632]: time="2025-10-29T00:41:17.889757564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 29 00:41:17.889768 containerd[1632]: time="2025-10-29T00:41:17.889768385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 29 00:41:17.889838 containerd[1632]: time="2025-10-29T00:41:17.889779325Z" level=info msg="loading plugin"
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 29 00:41:17.889838 containerd[1632]: time="2025-10-29T00:41:17.889792139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 29 00:41:17.889838 containerd[1632]: time="2025-10-29T00:41:17.889804843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 29 00:41:17.889838 containerd[1632]: time="2025-10-29T00:41:17.889818499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 29 00:41:17.889838 containerd[1632]: time="2025-10-29T00:41:17.889829429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 29 00:41:17.889967 containerd[1632]: time="2025-10-29T00:41:17.889841372Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 29 00:41:17.889967 containerd[1632]: time="2025-10-29T00:41:17.889865978Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 29 00:41:17.890053 containerd[1632]: time="2025-10-29T00:41:17.890018033Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 29 00:41:17.890053 containerd[1632]: time="2025-10-29T00:41:17.890040736Z" level=info msg="Start snapshots syncer" Oct 29 00:41:17.890125 containerd[1632]: time="2025-10-29T00:41:17.890096200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 29 00:41:17.890475 containerd[1632]: time="2025-10-29T00:41:17.890414547Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 29 00:41:17.890708 containerd[1632]: time="2025-10-29T00:41:17.890496560Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 29 00:41:17.890708 containerd[1632]: time="2025-10-29T00:41:17.890613900Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 29 00:41:17.890759 containerd[1632]: time="2025-10-29T00:41:17.890719539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 29 00:41:17.890759 containerd[1632]: time="2025-10-29T00:41:17.890737993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 29 00:41:17.890800 containerd[1632]: time="2025-10-29T00:41:17.890750116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 29 00:41:17.890800 containerd[1632]: time="2025-10-29T00:41:17.890791824Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 29 00:41:17.890841 containerd[1632]: time="2025-10-29T00:41:17.890808265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 29 00:41:17.890841 containerd[1632]: time="2025-10-29T00:41:17.890819476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 29 00:41:17.890841 containerd[1632]: time="2025-10-29T00:41:17.890829665Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 29 00:41:17.890893 containerd[1632]: time="2025-10-29T00:41:17.890855093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 29 00:41:17.890893 containerd[1632]: time="2025-10-29T00:41:17.890869169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 29 00:41:17.890893 containerd[1632]: time="2025-10-29T00:41:17.890879499Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 29 00:41:17.890949 containerd[1632]: time="2025-10-29T00:41:17.890916718Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 00:41:17.890949 containerd[1632]: time="2025-10-29T00:41:17.890934371Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 00:41:17.890949 containerd[1632]: time="2025-10-29T00:41:17.890942927Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 00:41:17.891032 containerd[1632]: time="2025-10-29T00:41:17.890963957Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 00:41:17.891032 containerd[1632]: time="2025-10-29T00:41:17.890972763Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 29 00:41:17.891032 containerd[1632]: time="2025-10-29T00:41:17.890984065Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 29 00:41:17.891032 containerd[1632]: time="2025-10-29T00:41:17.891013019Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 29 00:41:17.891116 containerd[1632]: time="2025-10-29T00:41:17.891039358Z" level=info msg="runtime interface created" Oct 29 00:41:17.891116 containerd[1632]: time="2025-10-29T00:41:17.891045580Z" level=info msg="created NRI interface" Oct 29 00:41:17.891116 containerd[1632]: time="2025-10-29T00:41:17.891067511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 29 00:41:17.891116 containerd[1632]: time="2025-10-29T00:41:17.891078892Z" level=info msg="Connect containerd service" Oct 29 00:41:17.891116 containerd[1632]: time="2025-10-29T00:41:17.891106073Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 29 00:41:17.891971 
containerd[1632]: time="2025-10-29T00:41:17.891926722Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 00:41:18.004125 tar[1629]: linux-amd64/README.md Oct 29 00:41:18.106208 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 29 00:41:18.251633 containerd[1632]: time="2025-10-29T00:41:18.251466658Z" level=info msg="Start subscribing containerd event" Oct 29 00:41:18.251633 containerd[1632]: time="2025-10-29T00:41:18.251561296Z" level=info msg="Start recovering state" Oct 29 00:41:18.251765 containerd[1632]: time="2025-10-29T00:41:18.251754829Z" level=info msg="Start event monitor" Oct 29 00:41:18.251810 containerd[1632]: time="2025-10-29T00:41:18.251783693Z" level=info msg="Start cni network conf syncer for default" Oct 29 00:41:18.251810 containerd[1632]: time="2025-10-29T00:41:18.251800304Z" level=info msg="Start streaming server" Oct 29 00:41:18.252004 containerd[1632]: time="2025-10-29T00:41:18.251827495Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 29 00:41:18.252004 containerd[1632]: time="2025-10-29T00:41:18.251841341Z" level=info msg="runtime interface starting up..." Oct 29 00:41:18.252004 containerd[1632]: time="2025-10-29T00:41:18.251851550Z" level=info msg="starting plugins..." Oct 29 00:41:18.252004 containerd[1632]: time="2025-10-29T00:41:18.251879001Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 29 00:41:18.252004 containerd[1632]: time="2025-10-29T00:41:18.251959703Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 00:41:18.252108 containerd[1632]: time="2025-10-29T00:41:18.252053048Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 29 00:41:18.252163 containerd[1632]: time="2025-10-29T00:41:18.252146353Z" level=info msg="containerd successfully booted in 0.437080s" Oct 29 00:41:18.252347 systemd[1]: Started containerd.service - containerd container runtime. Oct 29 00:41:18.681261 systemd-networkd[1540]: eth0: Gained IPv6LL Oct 29 00:41:18.684649 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 29 00:41:18.687572 systemd[1]: Reached target network-online.target - Network is Online. Oct 29 00:41:18.691122 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 29 00:41:18.694747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:41:18.719400 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 29 00:41:18.751897 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 29 00:41:18.752519 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 29 00:41:18.755389 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 29 00:41:18.759159 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 29 00:41:19.872396 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 29 00:41:19.876529 systemd[1]: Started sshd@0-10.0.0.77:22-10.0.0.1:57642.service - OpenSSH per-connection server daemon (10.0.0.1:57642). Oct 29 00:41:20.071229 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 57642 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:41:20.074231 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:41:20.083487 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 29 00:41:20.113562 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
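The containerd error earlier in this boot ("no network config found in /etc/cni/net.d: cni plugin not initialized") typically clears once a CNI network config exists in that directory; on a kubeadm-managed node the pod network add-on usually installs one. A minimal sketch of such a config, assuming the standard `bridge` and `host-local` plugins are present under /opt/cni/bin (the name and subnet below are illustrative, not taken from this host):

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
```

Dropped in as e.g. /etc/cni/net.d/10-examplenet.conflist, this would be picked up by the "cni network conf syncer" that containerd starts later in this log.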
Oct 29 00:41:20.124824 systemd-logind[1616]: New session 1 of user core. Oct 29 00:41:20.146453 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 29 00:41:20.153231 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 29 00:41:20.191945 (systemd)[1733]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:41:20.194580 systemd-logind[1616]: New session c1 of user core. Oct 29 00:41:20.360683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:41:20.363529 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 29 00:41:20.376348 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 00:41:20.418125 systemd[1733]: Queued start job for default target default.target. Oct 29 00:41:20.430448 systemd[1733]: Created slice app.slice - User Application Slice. Oct 29 00:41:20.430481 systemd[1733]: Reached target paths.target - Paths. Oct 29 00:41:20.430535 systemd[1733]: Reached target timers.target - Timers. Oct 29 00:41:20.432375 systemd[1733]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 29 00:41:20.447014 systemd[1733]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 29 00:41:20.447173 systemd[1733]: Reached target sockets.target - Sockets. Oct 29 00:41:20.447225 systemd[1733]: Reached target basic.target - Basic System. Oct 29 00:41:20.447277 systemd[1733]: Reached target default.target - Main User Target. Oct 29 00:41:20.447319 systemd[1733]: Startup finished in 243ms. Oct 29 00:41:20.447591 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 29 00:41:20.534757 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 29 00:41:20.537019 systemd[1]: Startup finished in 2.835s (kernel) + 6.195s (initrd) + 6.971s (userspace) = 16.002s. 
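The "Startup finished" summary above breaks boot time into kernel, initrd, and userspace phases; the components sum to 16.001s, within rounding of the reported 16.002s total. A short sketch of how such a line can be parsed (the regex and variable names are illustrative):

```python
import re

line = ('Startup finished in 2.835s (kernel) + 6.195s (initrd) '
        '+ 6.971s (userspace) = 16.002s.')

# Pull out each "<seconds>s (<phase>)" pair and the reported total.
phases = {name: float(sec)
          for sec, name in re.findall(r'([\d.]+)s \((\w+)\)', line)}
total = float(re.search(r'= ([\d.]+)s', line).group(1))

print(phases)                 # per-phase durations in seconds
print(sum(phases.values()))   # 16.001 -- rounding accounts for the 0.001s gap
```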
Oct 29 00:41:20.606162 systemd[1]: Started sshd@1-10.0.0.77:22-10.0.0.1:52022.service - OpenSSH per-connection server daemon (10.0.0.1:52022). Oct 29 00:41:20.673708 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 52022 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:41:20.675237 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:41:20.679684 systemd-logind[1616]: New session 2 of user core. Oct 29 00:41:20.738429 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 29 00:41:20.799059 sshd[1758]: Connection closed by 10.0.0.1 port 52022 Oct 29 00:41:20.799435 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Oct 29 00:41:20.812853 systemd[1]: sshd@1-10.0.0.77:22-10.0.0.1:52022.service: Deactivated successfully. Oct 29 00:41:20.814754 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 00:41:20.815617 systemd-logind[1616]: Session 2 logged out. Waiting for processes to exit. Oct 29 00:41:20.818747 systemd[1]: Started sshd@2-10.0.0.77:22-10.0.0.1:52030.service - OpenSSH per-connection server daemon (10.0.0.1:52030). Oct 29 00:41:20.819678 systemd-logind[1616]: Removed session 2. Oct 29 00:41:20.979667 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 52030 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:41:20.981768 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:41:20.990927 systemd-logind[1616]: New session 3 of user core. Oct 29 00:41:21.002180 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 29 00:41:21.053974 sshd[1772]: Connection closed by 10.0.0.1 port 52030 Oct 29 00:41:21.054368 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Oct 29 00:41:21.064983 systemd[1]: sshd@2-10.0.0.77:22-10.0.0.1:52030.service: Deactivated successfully. 
Oct 29 00:41:21.067108 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 00:41:21.068030 systemd-logind[1616]: Session 3 logged out. Waiting for processes to exit. Oct 29 00:41:21.071029 systemd[1]: Started sshd@3-10.0.0.77:22-10.0.0.1:52032.service - OpenSSH per-connection server daemon (10.0.0.1:52032). Oct 29 00:41:21.071818 systemd-logind[1616]: Removed session 3. Oct 29 00:41:21.130153 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 52032 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:41:21.132115 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:41:21.137662 systemd-logind[1616]: New session 4 of user core. Oct 29 00:41:21.153440 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 29 00:41:21.203370 kubelet[1744]: E1029 00:41:21.203265 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:41:21.208332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:41:21.208539 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:41:21.208932 systemd[1]: kubelet.service: Consumed 2.263s CPU time, 266.9M memory peak. Oct 29 00:41:21.210648 sshd[1782]: Connection closed by 10.0.0.1 port 52032 Oct 29 00:41:21.211065 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Oct 29 00:41:21.220797 systemd[1]: sshd@3-10.0.0.77:22-10.0.0.1:52032.service: Deactivated successfully. Oct 29 00:41:21.222697 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 00:41:21.223499 systemd-logind[1616]: Session 4 logged out. Waiting for processes to exit. 
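The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is expected on a node that has not yet run `kubeadm init` or `kubeadm join`: kubeadm writes that file during bootstrap, and systemd keeps restarting the unit until it appears. A hypothetical minimal shape of the file, for orientation only (on this host it should come from kubeadm, not be written by hand):

```yaml
# Hypothetical sketch of /var/lib/kubelet/config.yaml; kubeadm normally
# generates this during init/join.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # consistent with SystemdCgroup=true in the containerd CRI config logged earlier
authentication:
  anonymous:
    enabled: false
```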
Oct 29 00:41:21.226348 systemd[1]: Started sshd@4-10.0.0.77:22-10.0.0.1:52034.service - OpenSSH per-connection server daemon (10.0.0.1:52034). Oct 29 00:41:21.227125 systemd-logind[1616]: Removed session 4. Oct 29 00:41:21.284965 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 52034 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:41:21.286549 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:41:21.291104 systemd-logind[1616]: New session 5 of user core. Oct 29 00:41:21.301114 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 29 00:41:21.370239 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 29 00:41:21.370550 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:41:21.387956 sudo[1793]: pam_unix(sudo:session): session closed for user root Oct 29 00:41:21.390087 sshd[1792]: Connection closed by 10.0.0.1 port 52034 Oct 29 00:41:21.390451 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Oct 29 00:41:21.407624 systemd[1]: sshd@4-10.0.0.77:22-10.0.0.1:52034.service: Deactivated successfully. Oct 29 00:41:21.410523 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 00:41:21.411576 systemd-logind[1616]: Session 5 logged out. Waiting for processes to exit. Oct 29 00:41:21.417256 systemd[1]: Started sshd@5-10.0.0.77:22-10.0.0.1:52046.service - OpenSSH per-connection server daemon (10.0.0.1:52046). Oct 29 00:41:21.417830 systemd-logind[1616]: Removed session 5. Oct 29 00:41:21.495399 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 52046 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:41:21.497800 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:41:21.504484 systemd-logind[1616]: New session 6 of user core. 
Oct 29 00:41:21.519322 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 29 00:41:21.576932 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 29 00:41:21.577286 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:41:21.586694 sudo[1804]: pam_unix(sudo:session): session closed for user root Oct 29 00:41:21.594799 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 29 00:41:21.595141 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:41:21.606361 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 00:41:21.661742 augenrules[1826]: No rules Oct 29 00:41:21.663684 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 00:41:21.664058 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 00:41:21.666198 sudo[1803]: pam_unix(sudo:session): session closed for user root Oct 29 00:41:21.668078 sshd[1802]: Connection closed by 10.0.0.1 port 52046 Oct 29 00:41:21.668497 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Oct 29 00:41:21.677937 systemd[1]: sshd@5-10.0.0.77:22-10.0.0.1:52046.service: Deactivated successfully. Oct 29 00:41:21.679864 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 00:41:21.680644 systemd-logind[1616]: Session 6 logged out. Waiting for processes to exit. Oct 29 00:41:21.683225 systemd[1]: Started sshd@6-10.0.0.77:22-10.0.0.1:52060.service - OpenSSH per-connection server daemon (10.0.0.1:52060). Oct 29 00:41:21.683728 systemd-logind[1616]: Removed session 6. 
Oct 29 00:41:21.745967 sshd[1835]: Accepted publickey for core from 10.0.0.1 port 52060 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:41:21.747501 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:41:21.752618 systemd-logind[1616]: New session 7 of user core. Oct 29 00:41:21.766197 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 29 00:41:21.822618 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 00:41:21.822987 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:41:22.289855 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 29 00:41:22.302320 (dockerd)[1859]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 29 00:41:22.750277 dockerd[1859]: time="2025-10-29T00:41:22.750189790Z" level=info msg="Starting up" Oct 29 00:41:22.751317 dockerd[1859]: time="2025-10-29T00:41:22.751269725Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 29 00:41:22.769026 dockerd[1859]: time="2025-10-29T00:41:22.768969769Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 29 00:41:23.329296 dockerd[1859]: time="2025-10-29T00:41:23.329224897Z" level=info msg="Loading containers: start." Oct 29 00:41:23.340033 kernel: Initializing XFRM netlink socket Oct 29 00:41:23.642597 systemd-networkd[1540]: docker0: Link UP Oct 29 00:41:23.646885 dockerd[1859]: time="2025-10-29T00:41:23.646836252Z" level=info msg="Loading containers: done." Oct 29 00:41:23.668622 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1928611821-merged.mount: Deactivated successfully. 
Oct 29 00:41:23.670877 dockerd[1859]: time="2025-10-29T00:41:23.670808018Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 00:41:23.671046 dockerd[1859]: time="2025-10-29T00:41:23.670926971Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 29 00:41:23.671090 dockerd[1859]: time="2025-10-29T00:41:23.671060271Z" level=info msg="Initializing buildkit" Oct 29 00:41:23.704581 dockerd[1859]: time="2025-10-29T00:41:23.704529287Z" level=info msg="Completed buildkit initialization" Oct 29 00:41:23.710982 dockerd[1859]: time="2025-10-29T00:41:23.710934549Z" level=info msg="Daemon has completed initialization" Oct 29 00:41:23.711119 dockerd[1859]: time="2025-10-29T00:41:23.711041931Z" level=info msg="API listen on /run/docker.sock" Oct 29 00:41:23.711362 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 29 00:41:25.149953 containerd[1632]: time="2025-10-29T00:41:25.149856582Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 29 00:41:26.234955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568220146.mount: Deactivated successfully. 
Oct 29 00:41:28.852227 containerd[1632]: time="2025-10-29T00:41:28.852156095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:28.852791 containerd[1632]: time="2025-10-29T00:41:28.852717427Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Oct 29 00:41:28.854002 containerd[1632]: time="2025-10-29T00:41:28.853965378Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:28.856461 containerd[1632]: time="2025-10-29T00:41:28.856431643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:28.857291 containerd[1632]: time="2025-10-29T00:41:28.857260407Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 3.707316442s" Oct 29 00:41:28.857365 containerd[1632]: time="2025-10-29T00:41:28.857296435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 29 00:41:28.857911 containerd[1632]: time="2025-10-29T00:41:28.857886842Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 29 00:41:31.326228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Oct 29 00:41:31.328615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 29 00:41:31.418342 containerd[1632]: time="2025-10-29T00:41:31.418279562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:31.657847 containerd[1632]: time="2025-10-29T00:41:31.657703407Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Oct 29 00:41:31.802760 containerd[1632]: time="2025-10-29T00:41:31.802669821Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:31.806460 containerd[1632]: time="2025-10-29T00:41:31.806383866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:31.807396 containerd[1632]: time="2025-10-29T00:41:31.807354076Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.949440955s"
Oct 29 00:41:31.807396 containerd[1632]: time="2025-10-29T00:41:31.807385304Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Oct 29 00:41:31.811487 containerd[1632]: time="2025-10-29T00:41:31.811401677Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Oct 29 00:41:31.856208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 00:41:31.875354 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 29 00:41:31.938598 kubelet[2150]: E1029 00:41:31.938443 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 29 00:41:31.945901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 29 00:41:31.946113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 29 00:41:31.946514 systemd[1]: kubelet.service: Consumed 465ms CPU time, 110M memory peak.
Oct 29 00:41:33.479255 containerd[1632]: time="2025-10-29T00:41:33.479184234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:33.480139 containerd[1632]: time="2025-10-29T00:41:33.480110962Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Oct 29 00:41:33.481422 containerd[1632]: time="2025-10-29T00:41:33.481383549Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:33.484580 containerd[1632]: time="2025-10-29T00:41:33.484544687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:33.485474 containerd[1632]: time="2025-10-29T00:41:33.485441228Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.674011439s"
Oct 29 00:41:33.485534 containerd[1632]: time="2025-10-29T00:41:33.485477176Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Oct 29 00:41:33.486088 containerd[1632]: time="2025-10-29T00:41:33.486048337Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Oct 29 00:41:35.787284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982768637.mount: Deactivated successfully.
Oct 29 00:41:36.355574 containerd[1632]: time="2025-10-29T00:41:36.355490415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:36.356492 containerd[1632]: time="2025-10-29T00:41:36.356457589Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Oct 29 00:41:36.357633 containerd[1632]: time="2025-10-29T00:41:36.357569003Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:36.359569 containerd[1632]: time="2025-10-29T00:41:36.359528187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:36.360096 containerd[1632]: time="2025-10-29T00:41:36.360062569Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.873973977s"
Oct 29 00:41:36.360096 containerd[1632]: time="2025-10-29T00:41:36.360091233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Oct 29 00:41:36.360836 containerd[1632]: time="2025-10-29T00:41:36.360592944Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Oct 29 00:41:36.948495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2583423140.mount: Deactivated successfully.
Oct 29 00:41:38.125965 containerd[1632]: time="2025-10-29T00:41:38.125893413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:38.126898 containerd[1632]: time="2025-10-29T00:41:38.126867750Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Oct 29 00:41:38.128370 containerd[1632]: time="2025-10-29T00:41:38.128332417Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:38.130876 containerd[1632]: time="2025-10-29T00:41:38.130841592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:38.131706 containerd[1632]: time="2025-10-29T00:41:38.131637144Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.771000268s"
Oct 29 00:41:38.131706 containerd[1632]: time="2025-10-29T00:41:38.131698689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Oct 29 00:41:38.132272 containerd[1632]: time="2025-10-29T00:41:38.132245114Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 29 00:41:38.574037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301262742.mount: Deactivated successfully.
Oct 29 00:41:38.579543 containerd[1632]: time="2025-10-29T00:41:38.579482384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 29 00:41:38.580204 containerd[1632]: time="2025-10-29T00:41:38.580158391Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 29 00:41:38.581358 containerd[1632]: time="2025-10-29T00:41:38.581310161Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 29 00:41:38.583952 containerd[1632]: time="2025-10-29T00:41:38.583897584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 29 00:41:38.584467 containerd[1632]: time="2025-10-29T00:41:38.584420885Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 452.147118ms"
Oct 29 00:41:38.584467 containerd[1632]: time="2025-10-29T00:41:38.584468084Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 29 00:41:38.585147 containerd[1632]: time="2025-10-29T00:41:38.585103896Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Oct 29 00:41:39.137837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696737024.mount: Deactivated successfully.
Oct 29 00:41:42.076139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 29 00:41:42.078365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 29 00:41:42.116414 containerd[1632]: time="2025-10-29T00:41:42.116359681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:42.121902 containerd[1632]: time="2025-10-29T00:41:42.121859285Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Oct 29 00:41:42.123206 containerd[1632]: time="2025-10-29T00:41:42.123177968Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:42.126452 containerd[1632]: time="2025-10-29T00:41:42.126401734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:42.127591 containerd[1632]: time="2025-10-29T00:41:42.127559244Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.542423639s"
Oct 29 00:41:42.127591 containerd[1632]: time="2025-10-29T00:41:42.127593849Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Oct 29 00:41:42.283672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 00:41:42.288057 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 29 00:41:42.525620 kubelet[2296]: E1029 00:41:42.525445 2296 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 29 00:41:42.530944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 29 00:41:42.531300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 29 00:41:42.531866 systemd[1]: kubelet.service: Consumed 260ms CPU time, 109.2M memory peak.
Oct 29 00:41:45.556146 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 00:41:45.556320 systemd[1]: kubelet.service: Consumed 260ms CPU time, 109.2M memory peak.
Oct 29 00:41:45.558576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 29 00:41:45.584337 systemd[1]: Reload requested from client PID 2329 ('systemctl') (unit session-7.scope)...
Oct 29 00:41:45.584360 systemd[1]: Reloading...
Oct 29 00:41:45.657763 zram_generator::config[2372]: No configuration found.
Oct 29 00:41:45.926698 systemd[1]: Reloading finished in 341 ms.
Oct 29 00:41:45.997809 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 29 00:41:45.997924 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 29 00:41:45.998275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 00:41:45.998337 systemd[1]: kubelet.service: Consumed 157ms CPU time, 98.2M memory peak.
Oct 29 00:41:46.000068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 29 00:41:46.184357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 00:41:46.193306 (kubelet)[2420]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 29 00:41:46.233276 kubelet[2420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 29 00:41:46.233276 kubelet[2420]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 29 00:41:46.233276 kubelet[2420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 29 00:41:46.233702 kubelet[2420]: I1029 00:41:46.233338 2420 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 29 00:41:46.545296 kubelet[2420]: I1029 00:41:46.545157 2420 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 29 00:41:46.545296 kubelet[2420]: I1029 00:41:46.545205 2420 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 29 00:41:46.545540 kubelet[2420]: I1029 00:41:46.545508 2420 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 29 00:41:46.567498 kubelet[2420]: I1029 00:41:46.567430 2420 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 29 00:41:46.570865 kubelet[2420]: E1029 00:41:46.569523 2420 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 29 00:41:46.575536 kubelet[2420]: I1029 00:41:46.575512 2420 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 29 00:41:46.582305 kubelet[2420]: I1029 00:41:46.582235 2420 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 29 00:41:46.582659 kubelet[2420]: I1029 00:41:46.582603 2420 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 29 00:41:46.582849 kubelet[2420]: I1029 00:41:46.582644 2420 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 29 00:41:46.582955 kubelet[2420]: I1029 00:41:46.582864 2420 topology_manager.go:138] "Creating topology manager with none policy"
Oct 29 00:41:46.582955 kubelet[2420]: I1029 00:41:46.582876 2420 container_manager_linux.go:303] "Creating device plugin manager"
Oct 29 00:41:46.583114 kubelet[2420]: I1029 00:41:46.583095 2420 state_mem.go:36] "Initialized new in-memory state store"
Oct 29 00:41:46.584756 kubelet[2420]: I1029 00:41:46.584728 2420 kubelet.go:480] "Attempting to sync node with API server"
Oct 29 00:41:46.584756 kubelet[2420]: I1029 00:41:46.584750 2420 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 29 00:41:46.584830 kubelet[2420]: I1029 00:41:46.584793 2420 kubelet.go:386] "Adding apiserver pod source"
Oct 29 00:41:46.584830 kubelet[2420]: I1029 00:41:46.584818 2420 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 29 00:41:46.589559 kubelet[2420]: I1029 00:41:46.589523 2420 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 29 00:41:46.590077 kubelet[2420]: I1029 00:41:46.590055 2420 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 29 00:41:46.590543 kubelet[2420]: E1029 00:41:46.590502 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 29 00:41:46.590890 kubelet[2420]: E1029 00:41:46.590860 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 29 00:41:46.591120 kubelet[2420]: W1029 00:41:46.591090 2420 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 29 00:41:46.593728 kubelet[2420]: I1029 00:41:46.593697 2420 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 29 00:41:46.593773 kubelet[2420]: I1029 00:41:46.593767 2420 server.go:1289] "Started kubelet"
Oct 29 00:41:46.595202 kubelet[2420]: I1029 00:41:46.595091 2420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 29 00:41:46.596346 kubelet[2420]: I1029 00:41:46.596247 2420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 29 00:41:46.597309 kubelet[2420]: I1029 00:41:46.596607 2420 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 29 00:41:46.597309 kubelet[2420]: I1029 00:41:46.596671 2420 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 29 00:41:46.597756 kubelet[2420]: I1029 00:41:46.597710 2420 server.go:317] "Adding debug handlers to kubelet server"
Oct 29 00:41:46.598522 kubelet[2420]: I1029 00:41:46.598497 2420 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 29 00:41:46.599561 kubelet[2420]: E1029 00:41:46.599197 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:41:46.599561 kubelet[2420]: I1029 00:41:46.599240 2420 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 29 00:41:46.599561 kubelet[2420]: I1029 00:41:46.599467 2420 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 29 00:41:46.599561 kubelet[2420]: I1029 00:41:46.599535 2420 reconciler.go:26] "Reconciler: start to sync state"
Oct 29 00:41:46.599942 kubelet[2420]: E1029 00:41:46.599910 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 29 00:41:46.600157 kubelet[2420]: E1029 00:41:46.600073 2420 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 29 00:41:46.600157 kubelet[2420]: E1029 00:41:46.598426 2420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872cf7536dad52f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 00:41:46.593719599 +0000 UTC m=+0.395604591,LastTimestamp:2025-10-29 00:41:46.593719599 +0000 UTC m=+0.395604591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 29 00:41:46.600558 kubelet[2420]: I1029 00:41:46.600347 2420 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 29 00:41:46.601109 kubelet[2420]: E1029 00:41:46.601061 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="200ms"
Oct 29 00:41:46.605793 kubelet[2420]: I1029 00:41:46.605758 2420 factory.go:223] Registration of the containerd container factory successfully
Oct 29 00:41:46.606662 kubelet[2420]: I1029 00:41:46.605966 2420 factory.go:223] Registration of the systemd container factory successfully
Oct 29 00:41:46.621706 kubelet[2420]: I1029 00:41:46.621671 2420 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 29 00:41:46.621706 kubelet[2420]: I1029 00:41:46.621693 2420 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 29 00:41:46.621706 kubelet[2420]: I1029 00:41:46.621716 2420 state_mem.go:36] "Initialized new in-memory state store"
Oct 29 00:41:46.625871 kubelet[2420]: I1029 00:41:46.625839 2420 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 29 00:41:46.627537 kubelet[2420]: I1029 00:41:46.627518 2420 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 29 00:41:46.627600 kubelet[2420]: I1029 00:41:46.627555 2420 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 29 00:41:46.627600 kubelet[2420]: I1029 00:41:46.627582 2420 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 29 00:41:46.627600 kubelet[2420]: I1029 00:41:46.627594 2420 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 29 00:41:46.627691 kubelet[2420]: E1029 00:41:46.627638 2420 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 29 00:41:46.628270 kubelet[2420]: E1029 00:41:46.628233 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 29 00:41:46.699614 kubelet[2420]: E1029 00:41:46.699502 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:41:46.728508 kubelet[2420]: E1029 00:41:46.728464 2420 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 29 00:41:46.799761 kubelet[2420]: E1029 00:41:46.799668 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:41:46.802281 kubelet[2420]: E1029 00:41:46.802252 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="400ms"
Oct 29 00:41:46.900570 kubelet[2420]: E1029 00:41:46.900489 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:41:46.929118 kubelet[2420]: E1029 00:41:46.929030 2420 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 29 00:41:47.001678 kubelet[2420]: E1029 00:41:47.001574 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:41:47.101881 kubelet[2420]: E1029 00:41:47.101800 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:41:47.202619 kubelet[2420]: E1029 00:41:47.202533 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:41:47.203262 kubelet[2420]: E1029 00:41:47.203204 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="800ms"
Oct 29 00:41:47.303749 kubelet[2420]: E1029 00:41:47.303658 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:41:47.330028 kubelet[2420]: E1029 00:41:47.329952 2420 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 29 00:41:47.404618 kubelet[2420]: E1029 00:41:47.404448 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:41:47.451811 kubelet[2420]: I1029 00:41:47.451718 2420 policy_none.go:49] "None policy: Start"
Oct 29 00:41:47.451811 kubelet[2420]: I1029 00:41:47.451792 2420 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 29 00:41:47.451811 kubelet[2420]: I1029 00:41:47.451827 2420 state_mem.go:35] "Initializing new in-memory state store"
Oct 29 00:41:47.460764 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 29 00:41:47.487389 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 29 00:41:47.491192 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 29 00:41:47.504648 kubelet[2420]: E1029 00:41:47.504569 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:41:47.513560 kubelet[2420]: E1029 00:41:47.513522 2420 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 29 00:41:47.513789 kubelet[2420]: I1029 00:41:47.513769 2420 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 29 00:41:47.513863 kubelet[2420]: I1029 00:41:47.513790 2420 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 29 00:41:47.515073 kubelet[2420]: I1029 00:41:47.514178 2420 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 29 00:41:47.515286 kubelet[2420]: E1029 00:41:47.515261 2420 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 29 00:41:47.515409 kubelet[2420]: E1029 00:41:47.515313 2420 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 29 00:41:47.616449 kubelet[2420]: I1029 00:41:47.616381 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 29 00:41:47.616717 kubelet[2420]: E1029 00:41:47.616682 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Oct 29 00:41:47.819052 kubelet[2420]: I1029 00:41:47.818862 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 29 00:41:47.819412 kubelet[2420]: E1029 00:41:47.819330 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Oct 29 00:41:47.862244 kubelet[2420]: E1029 00:41:47.862192 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 29 00:41:48.004311 kubelet[2420]: E1029 00:41:48.004227 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="1.6s"
Oct 29 00:41:48.015627 kubelet[2420]: E1029 00:41:48.015551 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 29 00:41:48.036726 kubelet[2420]: E1029 00:41:48.036649 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 29 00:41:48.157625 systemd[1]: Created slice kubepods-burstable-podd01e309d29d45bf2cde3d6e1a2ef993a.slice - libcontainer container kubepods-burstable-podd01e309d29d45bf2cde3d6e1a2ef993a.slice.
Oct 29 00:41:48.166963 kubelet[2420]: E1029 00:41:48.166921 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 29 00:41:48.169282 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice.
Oct 29 00:41:48.178165 kubelet[2420]: E1029 00:41:48.178135 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 29 00:41:48.184827 kubelet[2420]: E1029 00:41:48.184783 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 29 00:41:48.188976 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice.
Oct 29 00:41:48.191268 kubelet[2420]: E1029 00:41:48.191228 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:41:48.209619 kubelet[2420]: I1029 00:41:48.209586 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:48.209700 kubelet[2420]: I1029 00:41:48.209620 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:48.209700 kubelet[2420]: I1029 00:41:48.209641 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:48.209700 kubelet[2420]: I1029 00:41:48.209658 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:48.209700 kubelet[2420]: I1029 00:41:48.209681 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d01e309d29d45bf2cde3d6e1a2ef993a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d01e309d29d45bf2cde3d6e1a2ef993a\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:48.209803 kubelet[2420]: I1029 00:41:48.209731 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:48.209803 kubelet[2420]: I1029 00:41:48.209768 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:48.209847 kubelet[2420]: I1029 00:41:48.209799 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d01e309d29d45bf2cde3d6e1a2ef993a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d01e309d29d45bf2cde3d6e1a2ef993a\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:48.209888 kubelet[2420]: I1029 00:41:48.209857 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d01e309d29d45bf2cde3d6e1a2ef993a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d01e309d29d45bf2cde3d6e1a2ef993a\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:48.221049 kubelet[2420]: I1029 00:41:48.220984 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:41:48.221421 kubelet[2420]: E1029 
00:41:48.221390 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Oct 29 00:41:48.468059 kubelet[2420]: E1029 00:41:48.467829 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:48.468910 containerd[1632]: time="2025-10-29T00:41:48.468839659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d01e309d29d45bf2cde3d6e1a2ef993a,Namespace:kube-system,Attempt:0,}" Oct 29 00:41:48.486238 kubelet[2420]: E1029 00:41:48.486200 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:48.486805 containerd[1632]: time="2025-10-29T00:41:48.486756539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 29 00:41:48.492138 kubelet[2420]: E1029 00:41:48.491922 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:48.492389 containerd[1632]: time="2025-10-29T00:41:48.492357142Z" level=info msg="connecting to shim b48a169b9b8f012da8702011de3661329061d729b156d3b6fe7474d619bd05a7" address="unix:///run/containerd/s/f6276d128439cb711f5e267f58034855b9e044d23e9731d8e39fb0bedbffe927" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:48.492442 containerd[1632]: time="2025-10-29T00:41:48.492425380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 29 00:41:48.531715 
containerd[1632]: time="2025-10-29T00:41:48.531651904Z" level=info msg="connecting to shim 4fad3c6867446dda656358a04fd2f761400bd616030ff4d1bed96afd2fbf0d34" address="unix:///run/containerd/s/655f6409cc3c5b2080835ffd88f4bace4ae15a732232dcfc74c6bd4f7901075f" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:48.532240 systemd[1]: Started cri-containerd-b48a169b9b8f012da8702011de3661329061d729b156d3b6fe7474d619bd05a7.scope - libcontainer container b48a169b9b8f012da8702011de3661329061d729b156d3b6fe7474d619bd05a7. Oct 29 00:41:48.546623 containerd[1632]: time="2025-10-29T00:41:48.546556125Z" level=info msg="connecting to shim b9ab3307402eaee618ab92d8f5b1991db054d12b39d57f63bd1900abd6961ddd" address="unix:///run/containerd/s/fc659cb86d7f6662e1b68d2d1f2deed16ecdcddbe5ac5f2c0ca5d10172b0a971" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:48.595184 systemd[1]: Started cri-containerd-b9ab3307402eaee618ab92d8f5b1991db054d12b39d57f63bd1900abd6961ddd.scope - libcontainer container b9ab3307402eaee618ab92d8f5b1991db054d12b39d57f63bd1900abd6961ddd. Oct 29 00:41:48.600925 systemd[1]: Started cri-containerd-4fad3c6867446dda656358a04fd2f761400bd616030ff4d1bed96afd2fbf0d34.scope - libcontainer container 4fad3c6867446dda656358a04fd2f761400bd616030ff4d1bed96afd2fbf0d34. 
Oct 29 00:41:48.617272 containerd[1632]: time="2025-10-29T00:41:48.617230524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d01e309d29d45bf2cde3d6e1a2ef993a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b48a169b9b8f012da8702011de3661329061d729b156d3b6fe7474d619bd05a7\"" Oct 29 00:41:48.618391 kubelet[2420]: E1029 00:41:48.618187 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:48.626826 containerd[1632]: time="2025-10-29T00:41:48.626787177Z" level=info msg="CreateContainer within sandbox \"b48a169b9b8f012da8702011de3661329061d729b156d3b6fe7474d619bd05a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 00:41:48.637016 containerd[1632]: time="2025-10-29T00:41:48.636875255Z" level=info msg="Container 9e89a82576ce2737feb9a69d4538886471bc7f5ff6527601a5e1167612924645: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:48.647528 containerd[1632]: time="2025-10-29T00:41:48.647461649Z" level=info msg="CreateContainer within sandbox \"b48a169b9b8f012da8702011de3661329061d729b156d3b6fe7474d619bd05a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9e89a82576ce2737feb9a69d4538886471bc7f5ff6527601a5e1167612924645\"" Oct 29 00:41:48.652372 containerd[1632]: time="2025-10-29T00:41:48.652061796Z" level=info msg="StartContainer for \"9e89a82576ce2737feb9a69d4538886471bc7f5ff6527601a5e1167612924645\"" Oct 29 00:41:48.656129 containerd[1632]: time="2025-10-29T00:41:48.656059162Z" level=info msg="connecting to shim 9e89a82576ce2737feb9a69d4538886471bc7f5ff6527601a5e1167612924645" address="unix:///run/containerd/s/f6276d128439cb711f5e267f58034855b9e044d23e9731d8e39fb0bedbffe927" protocol=ttrpc version=3 Oct 29 00:41:48.664975 containerd[1632]: time="2025-10-29T00:41:48.664928325Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fad3c6867446dda656358a04fd2f761400bd616030ff4d1bed96afd2fbf0d34\"" Oct 29 00:41:48.665637 kubelet[2420]: E1029 00:41:48.665602 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:48.670101 containerd[1632]: time="2025-10-29T00:41:48.669695515Z" level=info msg="CreateContainer within sandbox \"4fad3c6867446dda656358a04fd2f761400bd616030ff4d1bed96afd2fbf0d34\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 00:41:48.670101 containerd[1632]: time="2025-10-29T00:41:48.669731342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9ab3307402eaee618ab92d8f5b1991db054d12b39d57f63bd1900abd6961ddd\"" Oct 29 00:41:48.670405 kubelet[2420]: E1029 00:41:48.670385 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:48.675023 containerd[1632]: time="2025-10-29T00:41:48.674592689Z" level=info msg="CreateContainer within sandbox \"b9ab3307402eaee618ab92d8f5b1991db054d12b39d57f63bd1900abd6961ddd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 00:41:48.682694 containerd[1632]: time="2025-10-29T00:41:48.682664827Z" level=info msg="Container 6005ef4ceedb1945f56bf413092168dffe7cba794b0d2b003b6aba27ce72e15c: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:48.683146 systemd[1]: Started cri-containerd-9e89a82576ce2737feb9a69d4538886471bc7f5ff6527601a5e1167612924645.scope - libcontainer container 9e89a82576ce2737feb9a69d4538886471bc7f5ff6527601a5e1167612924645. 
Oct 29 00:41:48.691523 containerd[1632]: time="2025-10-29T00:41:48.691471312Z" level=info msg="Container a101a7716226120e59c4837595482d3a56f414f15d4756ae8550ec5757cf153a: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:48.696268 containerd[1632]: time="2025-10-29T00:41:48.696224136Z" level=info msg="CreateContainer within sandbox \"b9ab3307402eaee618ab92d8f5b1991db054d12b39d57f63bd1900abd6961ddd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6005ef4ceedb1945f56bf413092168dffe7cba794b0d2b003b6aba27ce72e15c\"" Oct 29 00:41:48.696733 containerd[1632]: time="2025-10-29T00:41:48.696709336Z" level=info msg="StartContainer for \"6005ef4ceedb1945f56bf413092168dffe7cba794b0d2b003b6aba27ce72e15c\"" Oct 29 00:41:48.698032 containerd[1632]: time="2025-10-29T00:41:48.697980029Z" level=info msg="connecting to shim 6005ef4ceedb1945f56bf413092168dffe7cba794b0d2b003b6aba27ce72e15c" address="unix:///run/containerd/s/fc659cb86d7f6662e1b68d2d1f2deed16ecdcddbe5ac5f2c0ca5d10172b0a971" protocol=ttrpc version=3 Oct 29 00:41:48.702964 containerd[1632]: time="2025-10-29T00:41:48.702895066Z" level=info msg="CreateContainer within sandbox \"4fad3c6867446dda656358a04fd2f761400bd616030ff4d1bed96afd2fbf0d34\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a101a7716226120e59c4837595482d3a56f414f15d4756ae8550ec5757cf153a\"" Oct 29 00:41:48.705125 containerd[1632]: time="2025-10-29T00:41:48.705088760Z" level=info msg="StartContainer for \"a101a7716226120e59c4837595482d3a56f414f15d4756ae8550ec5757cf153a\"" Oct 29 00:41:48.707096 containerd[1632]: time="2025-10-29T00:41:48.707031954Z" level=info msg="connecting to shim a101a7716226120e59c4837595482d3a56f414f15d4756ae8550ec5757cf153a" address="unix:///run/containerd/s/655f6409cc3c5b2080835ffd88f4bace4ae15a732232dcfc74c6bd4f7901075f" protocol=ttrpc version=3 Oct 29 00:41:48.728232 systemd[1]: Started 
cri-containerd-6005ef4ceedb1945f56bf413092168dffe7cba794b0d2b003b6aba27ce72e15c.scope - libcontainer container 6005ef4ceedb1945f56bf413092168dffe7cba794b0d2b003b6aba27ce72e15c. Oct 29 00:41:48.732962 systemd[1]: Started cri-containerd-a101a7716226120e59c4837595482d3a56f414f15d4756ae8550ec5757cf153a.scope - libcontainer container a101a7716226120e59c4837595482d3a56f414f15d4756ae8550ec5757cf153a. Oct 29 00:41:48.752117 kubelet[2420]: E1029 00:41:48.751625 2420 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 29 00:41:48.756120 containerd[1632]: time="2025-10-29T00:41:48.756052445Z" level=info msg="StartContainer for \"9e89a82576ce2737feb9a69d4538886471bc7f5ff6527601a5e1167612924645\" returns successfully" Oct 29 00:41:48.801168 containerd[1632]: time="2025-10-29T00:41:48.800463742Z" level=info msg="StartContainer for \"6005ef4ceedb1945f56bf413092168dffe7cba794b0d2b003b6aba27ce72e15c\" returns successfully" Oct 29 00:41:48.819845 containerd[1632]: time="2025-10-29T00:41:48.819769337Z" level=info msg="StartContainer for \"a101a7716226120e59c4837595482d3a56f414f15d4756ae8550ec5757cf153a\" returns successfully" Oct 29 00:41:49.024135 kubelet[2420]: I1029 00:41:49.023035 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:41:49.641464 kubelet[2420]: E1029 00:41:49.639663 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:41:49.641464 kubelet[2420]: E1029 00:41:49.639816 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:49.642855 kubelet[2420]: E1029 00:41:49.642826 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:41:49.642956 kubelet[2420]: E1029 00:41:49.642928 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:49.646060 kubelet[2420]: E1029 00:41:49.646034 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:41:49.646203 kubelet[2420]: E1029 00:41:49.646181 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:50.342488 kubelet[2420]: E1029 00:41:50.342434 2420 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 29 00:41:50.426438 kubelet[2420]: I1029 00:41:50.426386 2420 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 00:41:50.426438 kubelet[2420]: E1029 00:41:50.426432 2420 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 29 00:41:50.501134 kubelet[2420]: I1029 00:41:50.501046 2420 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:50.558650 kubelet[2420]: E1029 00:41:50.558599 2420 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:50.558650 kubelet[2420]: I1029 00:41:50.558642 2420 kubelet.go:3309] "Creating 
a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:50.560254 kubelet[2420]: E1029 00:41:50.560231 2420 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:50.560254 kubelet[2420]: I1029 00:41:50.560253 2420 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:50.562023 kubelet[2420]: E1029 00:41:50.561964 2420 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:50.592582 kubelet[2420]: I1029 00:41:50.592471 2420 apiserver.go:52] "Watching apiserver" Oct 29 00:41:50.599850 kubelet[2420]: I1029 00:41:50.599825 2420 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 00:41:50.646404 kubelet[2420]: I1029 00:41:50.646343 2420 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:50.647241 kubelet[2420]: I1029 00:41:50.646465 2420 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:50.647241 kubelet[2420]: I1029 00:41:50.646673 2420 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:50.649696 kubelet[2420]: E1029 00:41:50.649638 2420 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:50.650057 kubelet[2420]: E1029 00:41:50.649929 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:50.650807 kubelet[2420]: E1029 00:41:50.650768 2420 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:50.651046 kubelet[2420]: E1029 00:41:50.650980 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:50.651046 kubelet[2420]: E1029 00:41:50.651039 2420 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:50.651326 kubelet[2420]: E1029 00:41:50.651225 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:51.648439 kubelet[2420]: I1029 00:41:51.648376 2420 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:51.655861 kubelet[2420]: E1029 00:41:51.655818 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:52.650060 kubelet[2420]: E1029 00:41:52.650028 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:52.719255 systemd[1]: Reload requested from client PID 2707 ('systemctl') (unit session-7.scope)... Oct 29 00:41:52.719272 systemd[1]: Reloading... Oct 29 00:41:52.811035 zram_generator::config[2758]: No configuration found. 
Oct 29 00:41:53.040136 systemd[1]: Reloading finished in 320 ms. Oct 29 00:41:53.072407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:41:53.088488 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 00:41:53.088830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:41:53.088886 systemd[1]: kubelet.service: Consumed 928ms CPU time, 130.4M memory peak. Oct 29 00:41:53.091111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:41:53.324407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:41:53.335339 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 00:41:53.383706 kubelet[2796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 00:41:53.383706 kubelet[2796]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 00:41:53.383706 kubelet[2796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 00:41:53.384147 kubelet[2796]: I1029 00:41:53.383770 2796 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 00:41:53.391606 kubelet[2796]: I1029 00:41:53.391562 2796 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 29 00:41:53.391606 kubelet[2796]: I1029 00:41:53.391596 2796 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 00:41:53.391812 kubelet[2796]: I1029 00:41:53.391789 2796 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 00:41:53.393374 kubelet[2796]: I1029 00:41:53.393347 2796 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 29 00:41:53.396115 kubelet[2796]: I1029 00:41:53.396077 2796 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 00:41:53.401547 kubelet[2796]: I1029 00:41:53.401522 2796 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 00:41:53.406819 kubelet[2796]: I1029 00:41:53.406784 2796 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 00:41:53.407096 kubelet[2796]: I1029 00:41:53.407071 2796 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 00:41:53.407265 kubelet[2796]: I1029 00:41:53.407094 2796 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 00:41:53.407349 kubelet[2796]: I1029 00:41:53.407276 2796 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 00:41:53.407349 
kubelet[2796]: I1029 00:41:53.407286 2796 container_manager_linux.go:303] "Creating device plugin manager" Oct 29 00:41:53.407349 kubelet[2796]: I1029 00:41:53.407336 2796 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:41:53.407526 kubelet[2796]: I1029 00:41:53.407507 2796 kubelet.go:480] "Attempting to sync node with API server" Oct 29 00:41:53.407526 kubelet[2796]: I1029 00:41:53.407523 2796 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 00:41:53.407580 kubelet[2796]: I1029 00:41:53.407544 2796 kubelet.go:386] "Adding apiserver pod source" Oct 29 00:41:53.407580 kubelet[2796]: I1029 00:41:53.407567 2796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 00:41:53.410029 kubelet[2796]: I1029 00:41:53.409366 2796 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 00:41:53.410882 kubelet[2796]: I1029 00:41:53.410834 2796 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 00:41:53.416802 kubelet[2796]: I1029 00:41:53.416767 2796 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 00:41:53.416928 kubelet[2796]: I1029 00:41:53.416844 2796 server.go:1289] "Started kubelet" Oct 29 00:41:53.418371 kubelet[2796]: I1029 00:41:53.418350 2796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 00:41:53.418465 kubelet[2796]: I1029 00:41:53.418392 2796 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 00:41:53.419707 kubelet[2796]: I1029 00:41:53.418318 2796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 00:41:53.419707 kubelet[2796]: I1029 00:41:53.419616 2796 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 00:41:53.420830 
kubelet[2796]: I1029 00:41:53.420750 2796 server.go:317] "Adding debug handlers to kubelet server" Oct 29 00:41:53.423332 kubelet[2796]: E1029 00:41:53.423303 2796 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:53.423383 kubelet[2796]: I1029 00:41:53.423348 2796 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 00:41:53.423534 kubelet[2796]: I1029 00:41:53.423510 2796 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 00:41:53.423672 kubelet[2796]: I1029 00:41:53.423651 2796 reconciler.go:26] "Reconciler: start to sync state" Oct 29 00:41:53.424624 kubelet[2796]: I1029 00:41:53.424577 2796 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 00:41:53.425258 kubelet[2796]: I1029 00:41:53.425196 2796 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 00:41:53.430886 kubelet[2796]: E1029 00:41:53.430812 2796 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 00:41:53.431959 kubelet[2796]: I1029 00:41:53.431933 2796 factory.go:223] Registration of the containerd container factory successfully Oct 29 00:41:53.431959 kubelet[2796]: I1029 00:41:53.431953 2796 factory.go:223] Registration of the systemd container factory successfully Oct 29 00:41:53.432717 kubelet[2796]: I1029 00:41:53.432680 2796 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 29 00:41:53.447224 kubelet[2796]: I1029 00:41:53.447176 2796 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Oct 29 00:41:53.447224 kubelet[2796]: I1029 00:41:53.447208 2796 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 29 00:41:53.447224 kubelet[2796]: I1029 00:41:53.447229 2796 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 00:41:53.447417 kubelet[2796]: I1029 00:41:53.447238 2796 kubelet.go:2436] "Starting kubelet main sync loop" Oct 29 00:41:53.447417 kubelet[2796]: E1029 00:41:53.447286 2796 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 00:41:53.478362 kubelet[2796]: I1029 00:41:53.478317 2796 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 00:41:53.478362 kubelet[2796]: I1029 00:41:53.478336 2796 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 00:41:53.478362 kubelet[2796]: I1029 00:41:53.478358 2796 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:41:53.478619 kubelet[2796]: I1029 00:41:53.478505 2796 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 00:41:53.478619 kubelet[2796]: I1029 00:41:53.478516 2796 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 00:41:53.478619 kubelet[2796]: I1029 00:41:53.478532 2796 policy_none.go:49] "None policy: Start" Oct 29 00:41:53.478619 kubelet[2796]: I1029 00:41:53.478541 2796 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 00:41:53.478619 kubelet[2796]: I1029 00:41:53.478551 2796 state_mem.go:35] "Initializing new in-memory state store" Oct 29 00:41:53.478725 kubelet[2796]: I1029 00:41:53.478664 2796 state_mem.go:75] "Updated machine memory state" Oct 29 00:41:53.483189 kubelet[2796]: E1029 00:41:53.483136 2796 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 00:41:53.483418 kubelet[2796]: I1029 
00:41:53.483397 2796 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 00:41:53.483450 kubelet[2796]: I1029 00:41:53.483416 2796 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 00:41:53.484041 kubelet[2796]: I1029 00:41:53.484020 2796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 00:41:53.485034 kubelet[2796]: E1029 00:41:53.485016 2796 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 00:41:53.549094 kubelet[2796]: I1029 00:41:53.549019 2796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:53.549348 kubelet[2796]: I1029 00:41:53.549115 2796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:53.549348 kubelet[2796]: I1029 00:41:53.549304 2796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:53.556904 kubelet[2796]: E1029 00:41:53.556866 2796 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:53.588138 kubelet[2796]: I1029 00:41:53.588106 2796 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:41:53.594878 kubelet[2796]: I1029 00:41:53.594849 2796 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 29 00:41:53.594949 kubelet[2796]: I1029 00:41:53.594940 2796 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 00:41:53.724479 kubelet[2796]: I1029 00:41:53.724417 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:53.724479 kubelet[2796]: I1029 00:41:53.724468 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:53.724650 kubelet[2796]: I1029 00:41:53.724503 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:53.724650 kubelet[2796]: I1029 00:41:53.724528 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:53.724650 kubelet[2796]: I1029 00:41:53.724555 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:53.724650 kubelet[2796]: I1029 00:41:53.724588 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d01e309d29d45bf2cde3d6e1a2ef993a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d01e309d29d45bf2cde3d6e1a2ef993a\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:53.724650 kubelet[2796]: I1029 00:41:53.724611 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d01e309d29d45bf2cde3d6e1a2ef993a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d01e309d29d45bf2cde3d6e1a2ef993a\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:53.724762 kubelet[2796]: I1029 00:41:53.724639 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:53.724762 kubelet[2796]: I1029 00:41:53.724661 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d01e309d29d45bf2cde3d6e1a2ef993a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d01e309d29d45bf2cde3d6e1a2ef993a\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:53.856431 kubelet[2796]: E1029 00:41:53.856260 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:53.857107 kubelet[2796]: E1029 00:41:53.857064 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:53.857281 kubelet[2796]: E1029 00:41:53.857239 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:54.409005 kubelet[2796]: I1029 00:41:54.408928 2796 apiserver.go:52] "Watching apiserver" Oct 29 00:41:54.468367 kubelet[2796]: I1029 00:41:54.468309 2796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:54.469476 kubelet[2796]: I1029 00:41:54.468931 2796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:54.469476 kubelet[2796]: I1029 00:41:54.469193 2796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:54.478255 kubelet[2796]: E1029 00:41:54.478208 2796 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:54.478255 kubelet[2796]: E1029 00:41:54.478252 2796 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:54.478571 kubelet[2796]: E1029 00:41:54.478393 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:54.478571 kubelet[2796]: E1029 00:41:54.478445 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:54.479064 kubelet[2796]: E1029 00:41:54.479028 2796 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:54.479307 kubelet[2796]: E1029 00:41:54.479291 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:54.500098 kubelet[2796]: I1029 00:41:54.499986 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.4999419449999998 podStartE2EDuration="3.499941945s" podCreationTimestamp="2025-10-29 00:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:41:54.492874422 +0000 UTC m=+1.147322383" watchObservedRunningTime="2025-10-29 00:41:54.499941945 +0000 UTC m=+1.154389906" Oct 29 00:41:54.510197 kubelet[2796]: I1029 00:41:54.510086 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5100657549999998 podStartE2EDuration="1.510065755s" podCreationTimestamp="2025-10-29 00:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:41:54.510014196 +0000 UTC m=+1.164462157" watchObservedRunningTime="2025-10-29 00:41:54.510065755 +0000 UTC m=+1.164513737" Oct 29 00:41:54.510625 kubelet[2796]: I1029 00:41:54.510238 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5102300899999999 podStartE2EDuration="1.51023009s" podCreationTimestamp="2025-10-29 00:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:41:54.50156208 +0000 UTC m=+1.156010041" watchObservedRunningTime="2025-10-29 00:41:54.51023009 +0000 UTC m=+1.164678061" Oct 29 00:41:54.524623 kubelet[2796]: I1029 00:41:54.524574 2796 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 00:41:55.471022 kubelet[2796]: E1029 00:41:55.470935 2796 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:55.471816 kubelet[2796]: E1029 00:41:55.471114 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:55.471816 kubelet[2796]: E1029 00:41:55.471229 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:56.780726 kubelet[2796]: E1029 00:41:56.780669 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:56.893697 kubelet[2796]: E1029 00:41:56.893328 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:59.757273 kubelet[2796]: I1029 00:41:59.757233 2796 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 00:41:59.758081 containerd[1632]: time="2025-10-29T00:41:59.758033799Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 29 00:41:59.758373 kubelet[2796]: I1029 00:41:59.758247 2796 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 00:42:00.349176 kubelet[2796]: E1029 00:42:00.349121 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:00.610431 kubelet[2796]: E1029 00:42:00.610106 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:00.620775 systemd[1]: Created slice kubepods-besteffort-pod71e2a5f3_d4bb_4236_846d_b68ab11bbd6c.slice - libcontainer container kubepods-besteffort-pod71e2a5f3_d4bb_4236_846d_b68ab11bbd6c.slice. Oct 29 00:42:00.666012 kubelet[2796]: I1029 00:42:00.665918 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/71e2a5f3-d4bb-4236-846d-b68ab11bbd6c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-hplqv\" (UID: \"71e2a5f3-d4bb-4236-846d-b68ab11bbd6c\") " pod="tigera-operator/tigera-operator-7dcd859c48-hplqv" Oct 29 00:42:00.666012 kubelet[2796]: I1029 00:42:00.665977 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbq6k\" (UniqueName: \"kubernetes.io/projected/71e2a5f3-d4bb-4236-846d-b68ab11bbd6c-kube-api-access-mbq6k\") pod \"tigera-operator-7dcd859c48-hplqv\" (UID: \"71e2a5f3-d4bb-4236-846d-b68ab11bbd6c\") " pod="tigera-operator/tigera-operator-7dcd859c48-hplqv" Oct 29 00:42:00.728616 systemd[1]: Created slice kubepods-besteffort-poda7563ba9_7fb7_4c7e_8f44_93a8b4858ce2.slice - libcontainer container kubepods-besteffort-poda7563ba9_7fb7_4c7e_8f44_93a8b4858ce2.slice. 
Oct 29 00:42:00.766639 kubelet[2796]: I1029 00:42:00.766574 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7563ba9-7fb7-4c7e-8f44-93a8b4858ce2-xtables-lock\") pod \"kube-proxy-hxhn7\" (UID: \"a7563ba9-7fb7-4c7e-8f44-93a8b4858ce2\") " pod="kube-system/kube-proxy-hxhn7" Oct 29 00:42:00.766639 kubelet[2796]: I1029 00:42:00.766647 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7563ba9-7fb7-4c7e-8f44-93a8b4858ce2-lib-modules\") pod \"kube-proxy-hxhn7\" (UID: \"a7563ba9-7fb7-4c7e-8f44-93a8b4858ce2\") " pod="kube-system/kube-proxy-hxhn7" Oct 29 00:42:00.767147 kubelet[2796]: I1029 00:42:00.766677 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m92h\" (UniqueName: \"kubernetes.io/projected/a7563ba9-7fb7-4c7e-8f44-93a8b4858ce2-kube-api-access-2m92h\") pod \"kube-proxy-hxhn7\" (UID: \"a7563ba9-7fb7-4c7e-8f44-93a8b4858ce2\") " pod="kube-system/kube-proxy-hxhn7" Oct 29 00:42:00.767147 kubelet[2796]: I1029 00:42:00.766721 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a7563ba9-7fb7-4c7e-8f44-93a8b4858ce2-kube-proxy\") pod \"kube-proxy-hxhn7\" (UID: \"a7563ba9-7fb7-4c7e-8f44-93a8b4858ce2\") " pod="kube-system/kube-proxy-hxhn7" Oct 29 00:42:00.933386 containerd[1632]: time="2025-10-29T00:42:00.933253435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hplqv,Uid:71e2a5f3-d4bb-4236-846d-b68ab11bbd6c,Namespace:tigera-operator,Attempt:0,}" Oct 29 00:42:01.031922 kubelet[2796]: E1029 00:42:01.031844 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 29 00:42:01.032516 containerd[1632]: time="2025-10-29T00:42:01.032457679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hxhn7,Uid:a7563ba9-7fb7-4c7e-8f44-93a8b4858ce2,Namespace:kube-system,Attempt:0,}" Oct 29 00:42:01.185827 containerd[1632]: time="2025-10-29T00:42:01.185607779Z" level=info msg="connecting to shim 4050349d225c4bd991f60876cb1310670db6bd58b10bfb30d935d9e748ab2acb" address="unix:///run/containerd/s/3fb4d25423315c979e5209cdbcfb799165295e9b9b931458d7ed92801d022a5c" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:01.194790 containerd[1632]: time="2025-10-29T00:42:01.194318923Z" level=info msg="connecting to shim f9622b579f3e5a7fbf08da71e1bb2d0a4be67fa19531ba6928c0efcd3e81aa44" address="unix:///run/containerd/s/f976c433902a38382ad4d6b1c03d67a05fef53c62321f58f7db4cea2b3d1e798" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:01.247170 systemd[1]: Started cri-containerd-f9622b579f3e5a7fbf08da71e1bb2d0a4be67fa19531ba6928c0efcd3e81aa44.scope - libcontainer container f9622b579f3e5a7fbf08da71e1bb2d0a4be67fa19531ba6928c0efcd3e81aa44. Oct 29 00:42:01.250889 systemd[1]: Started cri-containerd-4050349d225c4bd991f60876cb1310670db6bd58b10bfb30d935d9e748ab2acb.scope - libcontainer container 4050349d225c4bd991f60876cb1310670db6bd58b10bfb30d935d9e748ab2acb. 
Oct 29 00:42:01.285816 containerd[1632]: time="2025-10-29T00:42:01.285733198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hxhn7,Uid:a7563ba9-7fb7-4c7e-8f44-93a8b4858ce2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4050349d225c4bd991f60876cb1310670db6bd58b10bfb30d935d9e748ab2acb\"" Oct 29 00:42:01.287296 kubelet[2796]: E1029 00:42:01.287269 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:01.294775 containerd[1632]: time="2025-10-29T00:42:01.294716197Z" level=info msg="CreateContainer within sandbox \"4050349d225c4bd991f60876cb1310670db6bd58b10bfb30d935d9e748ab2acb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 00:42:01.301910 containerd[1632]: time="2025-10-29T00:42:01.301860973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hplqv,Uid:71e2a5f3-d4bb-4236-846d-b68ab11bbd6c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f9622b579f3e5a7fbf08da71e1bb2d0a4be67fa19531ba6928c0efcd3e81aa44\"" Oct 29 00:42:01.303341 containerd[1632]: time="2025-10-29T00:42:01.303313204Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 00:42:01.309947 containerd[1632]: time="2025-10-29T00:42:01.309892804Z" level=info msg="Container 3a3b7e47b414ea69b781cc9b1ea643595c4e80eb2637d62e99cc1d0c3ea6d3a6: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:42:01.319114 containerd[1632]: time="2025-10-29T00:42:01.319059244Z" level=info msg="CreateContainer within sandbox \"4050349d225c4bd991f60876cb1310670db6bd58b10bfb30d935d9e748ab2acb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a3b7e47b414ea69b781cc9b1ea643595c4e80eb2637d62e99cc1d0c3ea6d3a6\"" Oct 29 00:42:01.319685 containerd[1632]: time="2025-10-29T00:42:01.319657251Z" level=info msg="StartContainer for 
\"3a3b7e47b414ea69b781cc9b1ea643595c4e80eb2637d62e99cc1d0c3ea6d3a6\"" Oct 29 00:42:01.321116 containerd[1632]: time="2025-10-29T00:42:01.321073954Z" level=info msg="connecting to shim 3a3b7e47b414ea69b781cc9b1ea643595c4e80eb2637d62e99cc1d0c3ea6d3a6" address="unix:///run/containerd/s/3fb4d25423315c979e5209cdbcfb799165295e9b9b931458d7ed92801d022a5c" protocol=ttrpc version=3 Oct 29 00:42:01.344149 systemd[1]: Started cri-containerd-3a3b7e47b414ea69b781cc9b1ea643595c4e80eb2637d62e99cc1d0c3ea6d3a6.scope - libcontainer container 3a3b7e47b414ea69b781cc9b1ea643595c4e80eb2637d62e99cc1d0c3ea6d3a6. Oct 29 00:42:01.392930 containerd[1632]: time="2025-10-29T00:42:01.392879065Z" level=info msg="StartContainer for \"3a3b7e47b414ea69b781cc9b1ea643595c4e80eb2637d62e99cc1d0c3ea6d3a6\" returns successfully" Oct 29 00:42:01.482550 kubelet[2796]: E1029 00:42:01.482429 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:01.484780 kubelet[2796]: E1029 00:42:01.484742 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:01.492723 kubelet[2796]: I1029 00:42:01.492661 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hxhn7" podStartSLOduration=1.492646272 podStartE2EDuration="1.492646272s" podCreationTimestamp="2025-10-29 00:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:42:01.492323589 +0000 UTC m=+8.146771540" watchObservedRunningTime="2025-10-29 00:42:01.492646272 +0000 UTC m=+8.147094233" Oct 29 00:42:02.536377 update_engine[1618]: I20251029 00:42:02.536250 1618 update_attempter.cc:509] Updating boot flags... 
Oct 29 00:42:04.494936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3912222563.mount: Deactivated successfully. Oct 29 00:42:06.788506 kubelet[2796]: E1029 00:42:06.788465 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:06.897909 kubelet[2796]: E1029 00:42:06.897862 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:07.489871 containerd[1632]: time="2025-10-29T00:42:07.489808575Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:07.490567 containerd[1632]: time="2025-10-29T00:42:07.490533157Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 29 00:42:07.491785 containerd[1632]: time="2025-10-29T00:42:07.491719221Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:07.493869 containerd[1632]: time="2025-10-29T00:42:07.493813967Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:07.494427 containerd[1632]: time="2025-10-29T00:42:07.494383575Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 6.191038421s" Oct 29 00:42:07.494471 
containerd[1632]: time="2025-10-29T00:42:07.494429633Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 29 00:42:07.498653 containerd[1632]: time="2025-10-29T00:42:07.498628240Z" level=info msg="CreateContainer within sandbox \"f9622b579f3e5a7fbf08da71e1bb2d0a4be67fa19531ba6928c0efcd3e81aa44\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 00:42:07.511850 containerd[1632]: time="2025-10-29T00:42:07.511807769Z" level=info msg="Container 17ba6cd90824498ef164811931c55d2088a75d4a1170c102df5811e52fce4eb8: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:42:07.515470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3928895383.mount: Deactivated successfully. Oct 29 00:42:07.519447 containerd[1632]: time="2025-10-29T00:42:07.519412165Z" level=info msg="CreateContainer within sandbox \"f9622b579f3e5a7fbf08da71e1bb2d0a4be67fa19531ba6928c0efcd3e81aa44\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"17ba6cd90824498ef164811931c55d2088a75d4a1170c102df5811e52fce4eb8\"" Oct 29 00:42:07.519880 containerd[1632]: time="2025-10-29T00:42:07.519848181Z" level=info msg="StartContainer for \"17ba6cd90824498ef164811931c55d2088a75d4a1170c102df5811e52fce4eb8\"" Oct 29 00:42:07.520846 containerd[1632]: time="2025-10-29T00:42:07.520811655Z" level=info msg="connecting to shim 17ba6cd90824498ef164811931c55d2088a75d4a1170c102df5811e52fce4eb8" address="unix:///run/containerd/s/f976c433902a38382ad4d6b1c03d67a05fef53c62321f58f7db4cea2b3d1e798" protocol=ttrpc version=3 Oct 29 00:42:07.546230 systemd[1]: Started cri-containerd-17ba6cd90824498ef164811931c55d2088a75d4a1170c102df5811e52fce4eb8.scope - libcontainer container 17ba6cd90824498ef164811931c55d2088a75d4a1170c102df5811e52fce4eb8. 
Oct 29 00:42:07.761589 containerd[1632]: time="2025-10-29T00:42:07.761457637Z" level=info msg="StartContainer for \"17ba6cd90824498ef164811931c55d2088a75d4a1170c102df5811e52fce4eb8\" returns successfully" Oct 29 00:42:13.214637 sudo[1839]: pam_unix(sudo:session): session closed for user root Oct 29 00:42:13.219019 sshd[1838]: Connection closed by 10.0.0.1 port 52060 Oct 29 00:42:13.221202 sshd-session[1835]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:13.235901 systemd[1]: sshd@6-10.0.0.77:22-10.0.0.1:52060.service: Deactivated successfully. Oct 29 00:42:13.237458 systemd-logind[1616]: Session 7 logged out. Waiting for processes to exit. Oct 29 00:42:13.239149 systemd[1]: session-7.scope: Deactivated successfully. Oct 29 00:42:13.239381 systemd[1]: session-7.scope: Consumed 5.985s CPU time, 215.3M memory peak. Oct 29 00:42:13.243037 systemd-logind[1616]: Removed session 7. Oct 29 00:42:18.112345 kubelet[2796]: I1029 00:42:18.112124 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-hplqv" podStartSLOduration=11.918439483 podStartE2EDuration="18.110699221s" podCreationTimestamp="2025-10-29 00:42:00 +0000 UTC" firstStartedPulling="2025-10-29 00:42:01.302799587 +0000 UTC m=+7.957247548" lastFinishedPulling="2025-10-29 00:42:07.495059325 +0000 UTC m=+14.149507286" observedRunningTime="2025-10-29 00:42:08.503896667 +0000 UTC m=+15.158344618" watchObservedRunningTime="2025-10-29 00:42:18.110699221 +0000 UTC m=+24.765147182" Oct 29 00:42:18.128467 systemd[1]: Created slice kubepods-besteffort-pod6f4ddee3_61c8_4e2d_9012_e46c09d58976.slice - libcontainer container kubepods-besteffort-pod6f4ddee3_61c8_4e2d_9012_e46c09d58976.slice. 
Oct 29 00:42:18.181852 kubelet[2796]: I1029 00:42:18.181763 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wddn\" (UniqueName: \"kubernetes.io/projected/6f4ddee3-61c8-4e2d-9012-e46c09d58976-kube-api-access-8wddn\") pod \"calico-typha-67d58876dd-zztgd\" (UID: \"6f4ddee3-61c8-4e2d-9012-e46c09d58976\") " pod="calico-system/calico-typha-67d58876dd-zztgd" Oct 29 00:42:18.181852 kubelet[2796]: I1029 00:42:18.181826 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f4ddee3-61c8-4e2d-9012-e46c09d58976-tigera-ca-bundle\") pod \"calico-typha-67d58876dd-zztgd\" (UID: \"6f4ddee3-61c8-4e2d-9012-e46c09d58976\") " pod="calico-system/calico-typha-67d58876dd-zztgd" Oct 29 00:42:18.181852 kubelet[2796]: I1029 00:42:18.181849 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6f4ddee3-61c8-4e2d-9012-e46c09d58976-typha-certs\") pod \"calico-typha-67d58876dd-zztgd\" (UID: \"6f4ddee3-61c8-4e2d-9012-e46c09d58976\") " pod="calico-system/calico-typha-67d58876dd-zztgd" Oct 29 00:42:18.736868 kubelet[2796]: E1029 00:42:18.736809 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:18.737613 containerd[1632]: time="2025-10-29T00:42:18.737554249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67d58876dd-zztgd,Uid:6f4ddee3-61c8-4e2d-9012-e46c09d58976,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:19.257662 systemd[1]: Created slice kubepods-besteffort-pod9dc4ef66_b051_4a99_b265_124e0920fea2.slice - libcontainer container kubepods-besteffort-pod9dc4ef66_b051_4a99_b265_124e0920fea2.slice. 
Oct 29 00:42:19.288965 kubelet[2796]: I1029 00:42:19.288926 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9dc4ef66-b051-4a99-b265-124e0920fea2-flexvol-driver-host\") pod \"calico-node-vlhss\" (UID: \"9dc4ef66-b051-4a99-b265-124e0920fea2\") " pod="calico-system/calico-node-vlhss" Oct 29 00:42:19.289384 kubelet[2796]: I1029 00:42:19.289013 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9dc4ef66-b051-4a99-b265-124e0920fea2-lib-modules\") pod \"calico-node-vlhss\" (UID: \"9dc4ef66-b051-4a99-b265-124e0920fea2\") " pod="calico-system/calico-node-vlhss" Oct 29 00:42:19.289384 kubelet[2796]: I1029 00:42:19.289060 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9dc4ef66-b051-4a99-b265-124e0920fea2-policysync\") pod \"calico-node-vlhss\" (UID: \"9dc4ef66-b051-4a99-b265-124e0920fea2\") " pod="calico-system/calico-node-vlhss" Oct 29 00:42:19.289384 kubelet[2796]: I1029 00:42:19.289085 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjbzm\" (UniqueName: \"kubernetes.io/projected/9dc4ef66-b051-4a99-b265-124e0920fea2-kube-api-access-jjbzm\") pod \"calico-node-vlhss\" (UID: \"9dc4ef66-b051-4a99-b265-124e0920fea2\") " pod="calico-system/calico-node-vlhss" Oct 29 00:42:19.289384 kubelet[2796]: I1029 00:42:19.289108 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9dc4ef66-b051-4a99-b265-124e0920fea2-cni-log-dir\") pod \"calico-node-vlhss\" (UID: \"9dc4ef66-b051-4a99-b265-124e0920fea2\") " pod="calico-system/calico-node-vlhss" Oct 29 00:42:19.289384 kubelet[2796]: I1029 00:42:19.289129 
2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9dc4ef66-b051-4a99-b265-124e0920fea2-cni-bin-dir\") pod \"calico-node-vlhss\" (UID: \"9dc4ef66-b051-4a99-b265-124e0920fea2\") " pod="calico-system/calico-node-vlhss" Oct 29 00:42:19.289534 kubelet[2796]: I1029 00:42:19.289150 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9dc4ef66-b051-4a99-b265-124e0920fea2-tigera-ca-bundle\") pod \"calico-node-vlhss\" (UID: \"9dc4ef66-b051-4a99-b265-124e0920fea2\") " pod="calico-system/calico-node-vlhss" Oct 29 00:42:19.289534 kubelet[2796]: I1029 00:42:19.289203 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9dc4ef66-b051-4a99-b265-124e0920fea2-var-run-calico\") pod \"calico-node-vlhss\" (UID: \"9dc4ef66-b051-4a99-b265-124e0920fea2\") " pod="calico-system/calico-node-vlhss" Oct 29 00:42:19.289534 kubelet[2796]: I1029 00:42:19.289240 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9dc4ef66-b051-4a99-b265-124e0920fea2-cni-net-dir\") pod \"calico-node-vlhss\" (UID: \"9dc4ef66-b051-4a99-b265-124e0920fea2\") " pod="calico-system/calico-node-vlhss" Oct 29 00:42:19.289534 kubelet[2796]: I1029 00:42:19.289269 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9dc4ef66-b051-4a99-b265-124e0920fea2-var-lib-calico\") pod \"calico-node-vlhss\" (UID: \"9dc4ef66-b051-4a99-b265-124e0920fea2\") " pod="calico-system/calico-node-vlhss" Oct 29 00:42:19.289534 kubelet[2796]: I1029 00:42:19.289289 2796 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9dc4ef66-b051-4a99-b265-124e0920fea2-xtables-lock\") pod \"calico-node-vlhss\" (UID: \"9dc4ef66-b051-4a99-b265-124e0920fea2\") " pod="calico-system/calico-node-vlhss" Oct 29 00:42:19.289647 kubelet[2796]: I1029 00:42:19.289311 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9dc4ef66-b051-4a99-b265-124e0920fea2-node-certs\") pod \"calico-node-vlhss\" (UID: \"9dc4ef66-b051-4a99-b265-124e0920fea2\") " pod="calico-system/calico-node-vlhss" Oct 29 00:42:19.397942 kubelet[2796]: E1029 00:42:19.397907 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.397942 kubelet[2796]: W1029 00:42:19.397935 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.398126 kubelet[2796]: E1029 00:42:19.398021 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.398272 kubelet[2796]: E1029 00:42:19.398254 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.398272 kubelet[2796]: W1029 00:42:19.398268 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.398342 kubelet[2796]: E1029 00:42:19.398279 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.429231 kubelet[2796]: E1029 00:42:19.429187 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.429231 kubelet[2796]: W1029 00:42:19.429214 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.429231 kubelet[2796]: E1029 00:42:19.429239 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.485487 containerd[1632]: time="2025-10-29T00:42:19.485419902Z" level=info msg="connecting to shim 9780cb776a15467751c15b34306d59ef38b7a5dfcf381ac9771c20ec8e40521b" address="unix:///run/containerd/s/4f805b78abc10f5806ded0d8e87264adb16d4ea04be802f304fcd1592d74cda0" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:19.515174 systemd[1]: Started cri-containerd-9780cb776a15467751c15b34306d59ef38b7a5dfcf381ac9771c20ec8e40521b.scope - libcontainer container 9780cb776a15467751c15b34306d59ef38b7a5dfcf381ac9771c20ec8e40521b. 
Oct 29 00:42:19.560787 kubelet[2796]: E1029 00:42:19.560748 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:19.561291 containerd[1632]: time="2025-10-29T00:42:19.561259236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vlhss,Uid:9dc4ef66-b051-4a99-b265-124e0920fea2,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:19.572395 containerd[1632]: time="2025-10-29T00:42:19.572334907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67d58876dd-zztgd,Uid:6f4ddee3-61c8-4e2d-9012-e46c09d58976,Namespace:calico-system,Attempt:0,} returns sandbox id \"9780cb776a15467751c15b34306d59ef38b7a5dfcf381ac9771c20ec8e40521b\"" Oct 29 00:42:19.573523 kubelet[2796]: E1029 00:42:19.573107 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:19.573789 containerd[1632]: time="2025-10-29T00:42:19.573762336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 29 00:42:19.635021 kubelet[2796]: E1029 00:42:19.634929 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfhx9" podUID="06790988-73f1-4592-ba5d-833c8bb13f59" Oct 29 00:42:19.686652 kubelet[2796]: E1029 00:42:19.686613 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.686652 kubelet[2796]: W1029 00:42:19.686640 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Oct 29 00:42:19.686652 kubelet[2796]: E1029 00:42:19.686668 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.686881 kubelet[2796]: E1029 00:42:19.686873 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.686913 kubelet[2796]: W1029 00:42:19.686882 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.686913 kubelet[2796]: E1029 00:42:19.686892 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.687150 kubelet[2796]: E1029 00:42:19.687134 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.687150 kubelet[2796]: W1029 00:42:19.687144 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.687226 kubelet[2796]: E1029 00:42:19.687155 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.687442 kubelet[2796]: E1029 00:42:19.687422 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.687518 kubelet[2796]: W1029 00:42:19.687437 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.687518 kubelet[2796]: E1029 00:42:19.687491 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.687713 kubelet[2796]: E1029 00:42:19.687699 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.687713 kubelet[2796]: W1029 00:42:19.687713 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.687788 kubelet[2796]: E1029 00:42:19.687723 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.687927 kubelet[2796]: E1029 00:42:19.687912 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.687927 kubelet[2796]: W1029 00:42:19.687923 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.687927 kubelet[2796]: E1029 00:42:19.687934 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.688165 kubelet[2796]: E1029 00:42:19.688146 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.688165 kubelet[2796]: W1029 00:42:19.688159 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.688165 kubelet[2796]: E1029 00:42:19.688170 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.688506 kubelet[2796]: E1029 00:42:19.688485 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.688506 kubelet[2796]: W1029 00:42:19.688501 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.688613 kubelet[2796]: E1029 00:42:19.688513 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.688868 kubelet[2796]: E1029 00:42:19.688748 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.688868 kubelet[2796]: W1029 00:42:19.688764 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.688868 kubelet[2796]: E1029 00:42:19.688775 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.689141 kubelet[2796]: E1029 00:42:19.689125 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.689229 kubelet[2796]: W1029 00:42:19.689213 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.689299 kubelet[2796]: E1029 00:42:19.689285 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.689582 kubelet[2796]: E1029 00:42:19.689562 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.689582 kubelet[2796]: W1029 00:42:19.689577 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.689582 kubelet[2796]: E1029 00:42:19.689591 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.689793 kubelet[2796]: E1029 00:42:19.689777 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.689793 kubelet[2796]: W1029 00:42:19.689789 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.689917 kubelet[2796]: E1029 00:42:19.689799 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.690055 kubelet[2796]: E1029 00:42:19.690021 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.690055 kubelet[2796]: W1029 00:42:19.690031 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.690055 kubelet[2796]: E1029 00:42:19.690042 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.690349 kubelet[2796]: E1029 00:42:19.690321 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.690349 kubelet[2796]: W1029 00:42:19.690342 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.690349 kubelet[2796]: E1029 00:42:19.690355 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.690610 kubelet[2796]: E1029 00:42:19.690590 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.690610 kubelet[2796]: W1029 00:42:19.690603 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.690716 kubelet[2796]: E1029 00:42:19.690644 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.690907 kubelet[2796]: E1029 00:42:19.690890 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.690907 kubelet[2796]: W1029 00:42:19.690904 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.691148 kubelet[2796]: E1029 00:42:19.690915 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.691148 kubelet[2796]: E1029 00:42:19.691143 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.691331 kubelet[2796]: W1029 00:42:19.691154 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.691331 kubelet[2796]: E1029 00:42:19.691164 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.691623 kubelet[2796]: E1029 00:42:19.691333 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.691623 kubelet[2796]: W1029 00:42:19.691342 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.691623 kubelet[2796]: E1029 00:42:19.691352 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.691623 kubelet[2796]: E1029 00:42:19.691533 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.691623 kubelet[2796]: W1029 00:42:19.691543 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.691623 kubelet[2796]: E1029 00:42:19.691553 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.691810 kubelet[2796]: E1029 00:42:19.691721 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.691810 kubelet[2796]: W1029 00:42:19.691730 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.691810 kubelet[2796]: E1029 00:42:19.691740 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.694366 kubelet[2796]: E1029 00:42:19.694340 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.694366 kubelet[2796]: W1029 00:42:19.694360 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.694495 kubelet[2796]: E1029 00:42:19.694376 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.694495 kubelet[2796]: I1029 00:42:19.694412 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/06790988-73f1-4592-ba5d-833c8bb13f59-varrun\") pod \"csi-node-driver-dfhx9\" (UID: \"06790988-73f1-4592-ba5d-833c8bb13f59\") " pod="calico-system/csi-node-driver-dfhx9" Oct 29 00:42:19.694728 kubelet[2796]: E1029 00:42:19.694706 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.694728 kubelet[2796]: W1029 00:42:19.694724 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.694728 kubelet[2796]: E1029 00:42:19.694738 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.695033 kubelet[2796]: I1029 00:42:19.694766 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/06790988-73f1-4592-ba5d-833c8bb13f59-socket-dir\") pod \"csi-node-driver-dfhx9\" (UID: \"06790988-73f1-4592-ba5d-833c8bb13f59\") " pod="calico-system/csi-node-driver-dfhx9" Oct 29 00:42:19.695134 kubelet[2796]: E1029 00:42:19.695037 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.695134 kubelet[2796]: W1029 00:42:19.695051 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.695134 kubelet[2796]: E1029 00:42:19.695064 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.697228 kubelet[2796]: E1029 00:42:19.697121 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.697228 kubelet[2796]: W1029 00:42:19.697146 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.697228 kubelet[2796]: E1029 00:42:19.697169 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.698101 kubelet[2796]: E1029 00:42:19.697443 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.698101 kubelet[2796]: W1029 00:42:19.697454 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.698101 kubelet[2796]: E1029 00:42:19.697466 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.698101 kubelet[2796]: I1029 00:42:19.697500 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06790988-73f1-4592-ba5d-833c8bb13f59-kubelet-dir\") pod \"csi-node-driver-dfhx9\" (UID: \"06790988-73f1-4592-ba5d-833c8bb13f59\") " pod="calico-system/csi-node-driver-dfhx9" Oct 29 00:42:19.698101 kubelet[2796]: E1029 00:42:19.697695 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.698101 kubelet[2796]: W1029 00:42:19.697708 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.698101 kubelet[2796]: E1029 00:42:19.697720 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.698101 kubelet[2796]: I1029 00:42:19.697808 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06790988-73f1-4592-ba5d-833c8bb13f59-registration-dir\") pod \"csi-node-driver-dfhx9\" (UID: \"06790988-73f1-4592-ba5d-833c8bb13f59\") " pod="calico-system/csi-node-driver-dfhx9" Oct 29 00:42:19.698101 kubelet[2796]: E1029 00:42:19.697972 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.698354 kubelet[2796]: W1029 00:42:19.697983 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.698354 kubelet[2796]: E1029 00:42:19.698014 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.698354 kubelet[2796]: E1029 00:42:19.698221 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.698354 kubelet[2796]: W1029 00:42:19.698231 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.698354 kubelet[2796]: E1029 00:42:19.698242 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.699028 kubelet[2796]: E1029 00:42:19.698541 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.699028 kubelet[2796]: W1029 00:42:19.698558 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.699028 kubelet[2796]: E1029 00:42:19.698629 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.699028 kubelet[2796]: E1029 00:42:19.698864 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.699028 kubelet[2796]: W1029 00:42:19.698876 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.699028 kubelet[2796]: E1029 00:42:19.698887 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.701281 kubelet[2796]: E1029 00:42:19.699138 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.701281 kubelet[2796]: W1029 00:42:19.699149 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.701281 kubelet[2796]: E1029 00:42:19.699160 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.701281 kubelet[2796]: I1029 00:42:19.699187 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdnkj\" (UniqueName: \"kubernetes.io/projected/06790988-73f1-4592-ba5d-833c8bb13f59-kube-api-access-wdnkj\") pod \"csi-node-driver-dfhx9\" (UID: \"06790988-73f1-4592-ba5d-833c8bb13f59\") " pod="calico-system/csi-node-driver-dfhx9" Oct 29 00:42:19.701281 kubelet[2796]: E1029 00:42:19.700072 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.701281 kubelet[2796]: W1029 00:42:19.700089 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.701281 kubelet[2796]: E1029 00:42:19.700104 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.701707 kubelet[2796]: E1029 00:42:19.701546 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.701707 kubelet[2796]: W1029 00:42:19.701562 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.701707 kubelet[2796]: E1029 00:42:19.701576 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.701879 kubelet[2796]: E1029 00:42:19.701864 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.701972 kubelet[2796]: W1029 00:42:19.701940 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.701972 kubelet[2796]: E1029 00:42:19.701959 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.702516 kubelet[2796]: E1029 00:42:19.702286 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.702516 kubelet[2796]: W1029 00:42:19.702303 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.702516 kubelet[2796]: E1029 00:42:19.702318 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.728373 containerd[1632]: time="2025-10-29T00:42:19.728311801Z" level=info msg="connecting to shim 54f28b6baaf8b6f2dbdd59d300787360db26219933e42bf9c6487dcdfb0ca45e" address="unix:///run/containerd/s/a1358ea371e21275f9685dcc11920f16ec6aeb36cea47f2700b706da4e49de3b" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:19.758200 systemd[1]: Started cri-containerd-54f28b6baaf8b6f2dbdd59d300787360db26219933e42bf9c6487dcdfb0ca45e.scope - libcontainer container 54f28b6baaf8b6f2dbdd59d300787360db26219933e42bf9c6487dcdfb0ca45e. 
Oct 29 00:42:19.795465 containerd[1632]: time="2025-10-29T00:42:19.793529108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vlhss,Uid:9dc4ef66-b051-4a99-b265-124e0920fea2,Namespace:calico-system,Attempt:0,} returns sandbox id \"54f28b6baaf8b6f2dbdd59d300787360db26219933e42bf9c6487dcdfb0ca45e\"" Oct 29 00:42:19.796611 kubelet[2796]: E1029 00:42:19.796585 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:19.800069 kubelet[2796]: E1029 00:42:19.800044 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.800069 kubelet[2796]: W1029 00:42:19.800062 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.800236 kubelet[2796]: E1029 00:42:19.800084 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.800380 kubelet[2796]: E1029 00:42:19.800351 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.800380 kubelet[2796]: W1029 00:42:19.800370 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.800452 kubelet[2796]: E1029 00:42:19.800382 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.800968 kubelet[2796]: E1029 00:42:19.800655 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.800968 kubelet[2796]: W1029 00:42:19.800943 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.800968 kubelet[2796]: E1029 00:42:19.800956 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.801551 kubelet[2796]: E1029 00:42:19.801520 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.801551 kubelet[2796]: W1029 00:42:19.801534 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.801551 kubelet[2796]: E1029 00:42:19.801545 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.801932 kubelet[2796]: E1029 00:42:19.801901 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.801932 kubelet[2796]: W1029 00:42:19.801915 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.801932 kubelet[2796]: E1029 00:42:19.801927 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.802396 kubelet[2796]: E1029 00:42:19.802365 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.802396 kubelet[2796]: W1029 00:42:19.802383 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.802483 kubelet[2796]: E1029 00:42:19.802397 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.802607 kubelet[2796]: E1029 00:42:19.802591 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.802607 kubelet[2796]: W1029 00:42:19.802603 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.802707 kubelet[2796]: E1029 00:42:19.802623 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.802856 kubelet[2796]: E1029 00:42:19.802807 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.802856 kubelet[2796]: W1029 00:42:19.802822 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.802856 kubelet[2796]: E1029 00:42:19.802830 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.803289 kubelet[2796]: E1029 00:42:19.803271 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.803289 kubelet[2796]: W1029 00:42:19.803284 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.803560 kubelet[2796]: E1029 00:42:19.803293 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.803560 kubelet[2796]: E1029 00:42:19.803524 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.803560 kubelet[2796]: W1029 00:42:19.803532 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.803560 kubelet[2796]: E1029 00:42:19.803541 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.803775 kubelet[2796]: E1029 00:42:19.803751 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.803775 kubelet[2796]: W1029 00:42:19.803764 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.803775 kubelet[2796]: E1029 00:42:19.803774 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.804019 kubelet[2796]: E1029 00:42:19.803980 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.804019 kubelet[2796]: W1029 00:42:19.804006 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.804019 kubelet[2796]: E1029 00:42:19.804015 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.804270 kubelet[2796]: E1029 00:42:19.804253 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.804270 kubelet[2796]: W1029 00:42:19.804265 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.804361 kubelet[2796]: E1029 00:42:19.804275 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.804488 kubelet[2796]: E1029 00:42:19.804452 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.804488 kubelet[2796]: W1029 00:42:19.804469 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.804488 kubelet[2796]: E1029 00:42:19.804478 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.804776 kubelet[2796]: E1029 00:42:19.804752 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.804776 kubelet[2796]: W1029 00:42:19.804764 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.804776 kubelet[2796]: E1029 00:42:19.804772 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.805027 kubelet[2796]: E1029 00:42:19.805011 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.805027 kubelet[2796]: W1029 00:42:19.805021 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.805027 kubelet[2796]: E1029 00:42:19.805029 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.805239 kubelet[2796]: E1029 00:42:19.805224 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.805239 kubelet[2796]: W1029 00:42:19.805233 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.805239 kubelet[2796]: E1029 00:42:19.805241 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.805678 kubelet[2796]: E1029 00:42:19.805653 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.805678 kubelet[2796]: W1029 00:42:19.805666 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.805678 kubelet[2796]: E1029 00:42:19.805674 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.805898 kubelet[2796]: E1029 00:42:19.805875 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.805898 kubelet[2796]: W1029 00:42:19.805888 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.805898 kubelet[2796]: E1029 00:42:19.805897 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.806154 kubelet[2796]: E1029 00:42:19.806139 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.806154 kubelet[2796]: W1029 00:42:19.806148 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.806154 kubelet[2796]: E1029 00:42:19.806156 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.806343 kubelet[2796]: E1029 00:42:19.806329 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.806343 kubelet[2796]: W1029 00:42:19.806338 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.806432 kubelet[2796]: E1029 00:42:19.806346 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.806557 kubelet[2796]: E1029 00:42:19.806542 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.806557 kubelet[2796]: W1029 00:42:19.806551 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.806622 kubelet[2796]: E1029 00:42:19.806558 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.806812 kubelet[2796]: E1029 00:42:19.806796 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.806812 kubelet[2796]: W1029 00:42:19.806807 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.806889 kubelet[2796]: E1029 00:42:19.806815 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.807103 kubelet[2796]: E1029 00:42:19.807078 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.807103 kubelet[2796]: W1029 00:42:19.807090 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.807103 kubelet[2796]: E1029 00:42:19.807098 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:19.808116 kubelet[2796]: E1029 00:42:19.808090 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.808116 kubelet[2796]: W1029 00:42:19.808112 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.808116 kubelet[2796]: E1029 00:42:19.808127 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:19.815951 kubelet[2796]: E1029 00:42:19.815916 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:19.815951 kubelet[2796]: W1029 00:42:19.815939 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:19.815951 kubelet[2796]: E1029 00:42:19.815975 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:21.183087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2227574966.mount: Deactivated successfully. 
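The burst above is a single failure repeated once per plugin probe: the kubelet execs the FlexVolume driver at `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds`, the binary is not present ("executable file not found in $PATH"), so the call produces empty output, and unmarshalling `""` as the driver's JSON status fails with "unexpected end of JSON input" — hence the E/W/E triplet from `driver-call.go` and `plugins.go`. A minimal sketch of that failure mode, assuming a hypothetical `call_driver` helper (only the driver path is taken from the log; this is not kubelet's actual code):

```python
import json
import subprocess

# Path copied verbatim from the log entries above.
FLEX_DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def call_driver(executable, args):
    """Mimic the kubelet's FlexVolume driver call: exec the binary and
    parse its stdout as JSON. A missing executable yields empty output,
    and parsing "" fails -- Go reports this as "unexpected end of JSON
    input", which is exactly the error the log repeats."""
    try:
        out = subprocess.run([executable, *args],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        # Matches the log: executable file not found in $PATH, output: ""
        out = ""
    try:
        return json.loads(out)
    except json.JSONDecodeError as exc:
        raise RuntimeError(
            f"Failed to unmarshal output for command: {args[0]}, "
            f"output: {out!r}, error: {exc}") from exc
```

Calling `call_driver(FLEX_DRIVER, ["init"])` on a host without the `uds` binary reproduces the unmarshal failure; the probing itself is harmless noise unless a workload actually needs that FlexVolume driver.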
Oct 29 00:42:21.448831 kubelet[2796]: E1029 00:42:21.448654 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfhx9" podUID="06790988-73f1-4592-ba5d-833c8bb13f59" Oct 29 00:42:21.632629 containerd[1632]: time="2025-10-29T00:42:21.632552541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:21.633803 containerd[1632]: time="2025-10-29T00:42:21.633775614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 29 00:42:21.634968 containerd[1632]: time="2025-10-29T00:42:21.634927973Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:21.637342 containerd[1632]: time="2025-10-29T00:42:21.637291603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:21.638013 containerd[1632]: time="2025-10-29T00:42:21.637947657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.064151337s" Oct 29 00:42:21.638013 containerd[1632]: time="2025-10-29T00:42:21.637984668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 29 00:42:21.639375 containerd[1632]: time="2025-10-29T00:42:21.638950335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 29 00:42:21.652024 containerd[1632]: time="2025-10-29T00:42:21.651786981Z" level=info msg="CreateContainer within sandbox \"9780cb776a15467751c15b34306d59ef38b7a5dfcf381ac9771c20ec8e40521b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 29 00:42:21.660683 containerd[1632]: time="2025-10-29T00:42:21.660638563Z" level=info msg="Container 7c81c14ee9e97c61832cd8b004f30c6d7e6584b89aeb239b57174758c6d03761: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:42:21.671310 containerd[1632]: time="2025-10-29T00:42:21.671246864Z" level=info msg="CreateContainer within sandbox \"9780cb776a15467751c15b34306d59ef38b7a5dfcf381ac9771c20ec8e40521b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7c81c14ee9e97c61832cd8b004f30c6d7e6584b89aeb239b57174758c6d03761\"" Oct 29 00:42:21.672224 containerd[1632]: time="2025-10-29T00:42:21.672172326Z" level=info msg="StartContainer for \"7c81c14ee9e97c61832cd8b004f30c6d7e6584b89aeb239b57174758c6d03761\"" Oct 29 00:42:21.673843 containerd[1632]: time="2025-10-29T00:42:21.673808036Z" level=info msg="connecting to shim 7c81c14ee9e97c61832cd8b004f30c6d7e6584b89aeb239b57174758c6d03761" address="unix:///run/containerd/s/4f805b78abc10f5806ded0d8e87264adb16d4ea04be802f304fcd1592d74cda0" protocol=ttrpc version=3 Oct 29 00:42:21.711325 systemd[1]: Started cri-containerd-7c81c14ee9e97c61832cd8b004f30c6d7e6584b89aeb239b57174758c6d03761.scope - libcontainer container 7c81c14ee9e97c61832cd8b004f30c6d7e6584b89aeb239b57174758c6d03761. 
Oct 29 00:42:21.765595 containerd[1632]: time="2025-10-29T00:42:21.765469681Z" level=info msg="StartContainer for \"7c81c14ee9e97c61832cd8b004f30c6d7e6584b89aeb239b57174758c6d03761\" returns successfully" Oct 29 00:42:22.527354 kubelet[2796]: E1029 00:42:22.527308 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:22.609402 kubelet[2796]: E1029 00:42:22.609341 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.609402 kubelet[2796]: W1029 00:42:22.609371 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.609402 kubelet[2796]: E1029 00:42:22.609399 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:22.609685 kubelet[2796]: E1029 00:42:22.609600 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.609685 kubelet[2796]: W1029 00:42:22.609609 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.609685 kubelet[2796]: E1029 00:42:22.609618 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:22.609829 kubelet[2796]: E1029 00:42:22.609802 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.609829 kubelet[2796]: W1029 00:42:22.609816 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.609829 kubelet[2796]: E1029 00:42:22.609828 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:22.610142 kubelet[2796]: E1029 00:42:22.610121 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.610142 kubelet[2796]: W1029 00:42:22.610134 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.610235 kubelet[2796]: E1029 00:42:22.610145 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:22.610720 kubelet[2796]: E1029 00:42:22.610695 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.610720 kubelet[2796]: W1029 00:42:22.610711 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.610837 kubelet[2796]: E1029 00:42:22.610724 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:22.610959 kubelet[2796]: E1029 00:42:22.610935 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.610959 kubelet[2796]: W1029 00:42:22.610948 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.611043 kubelet[2796]: E1029 00:42:22.610962 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:22.611325 kubelet[2796]: E1029 00:42:22.611279 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.611325 kubelet[2796]: W1029 00:42:22.611308 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.611325 kubelet[2796]: E1029 00:42:22.611341 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:22.612631 kubelet[2796]: E1029 00:42:22.611711 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.612631 kubelet[2796]: W1029 00:42:22.611721 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.612631 kubelet[2796]: E1029 00:42:22.611734 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:22.612631 kubelet[2796]: E1029 00:42:22.612132 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.612631 kubelet[2796]: W1029 00:42:22.612146 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.612631 kubelet[2796]: E1029 00:42:22.612160 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:22.612631 kubelet[2796]: E1029 00:42:22.612479 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.612631 kubelet[2796]: W1029 00:42:22.612491 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.612631 kubelet[2796]: E1029 00:42:22.612505 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:22.612951 kubelet[2796]: E1029 00:42:22.612759 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.612951 kubelet[2796]: W1029 00:42:22.612774 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.612951 kubelet[2796]: E1029 00:42:22.612791 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:22.614017 kubelet[2796]: E1029 00:42:22.613959 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.614017 kubelet[2796]: W1029 00:42:22.613981 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.614017 kubelet[2796]: E1029 00:42:22.614007 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:22.614273 kubelet[2796]: E1029 00:42:22.614251 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.614273 kubelet[2796]: W1029 00:42:22.614264 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.614273 kubelet[2796]: E1029 00:42:22.614276 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:22.614477 kubelet[2796]: E1029 00:42:22.614440 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.614477 kubelet[2796]: W1029 00:42:22.614451 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.614477 kubelet[2796]: E1029 00:42:22.614458 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:22.614664 kubelet[2796]: E1029 00:42:22.614641 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.614664 kubelet[2796]: W1029 00:42:22.614653 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.614664 kubelet[2796]: E1029 00:42:22.614661 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:22.624425 kubelet[2796]: E1029 00:42:22.624385 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:22.624425 kubelet[2796]: W1029 00:42:22.624413 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:22.624531 kubelet[2796]: E1029 00:42:22.624438 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 29 00:42:22.624711 kubelet[2796]: E1029 00:42:22.624687 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.624711 kubelet[2796]: W1029 00:42:22.624699 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.624711 kubelet[2796]: E1029 00:42:22.624707 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.624970 kubelet[2796]: E1029 00:42:22.624947 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.624970 kubelet[2796]: W1029 00:42:22.624961 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.624970 kubelet[2796]: E1029 00:42:22.624969 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.625406 kubelet[2796]: E1029 00:42:22.625367 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.625406 kubelet[2796]: W1029 00:42:22.625397 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.625469 kubelet[2796]: E1029 00:42:22.625420 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.625683 kubelet[2796]: E1029 00:42:22.625656 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.625683 kubelet[2796]: W1029 00:42:22.625672 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.625683 kubelet[2796]: E1029 00:42:22.625683 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.625969 kubelet[2796]: E1029 00:42:22.625936 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.625969 kubelet[2796]: W1029 00:42:22.625957 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.626051 kubelet[2796]: E1029 00:42:22.625972 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.626262 kubelet[2796]: E1029 00:42:22.626235 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.626262 kubelet[2796]: W1029 00:42:22.626253 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.626310 kubelet[2796]: E1029 00:42:22.626265 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.626484 kubelet[2796]: E1029 00:42:22.626466 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.626484 kubelet[2796]: W1029 00:42:22.626481 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.626537 kubelet[2796]: E1029 00:42:22.626493 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.626734 kubelet[2796]: E1029 00:42:22.626714 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.626734 kubelet[2796]: W1029 00:42:22.626731 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.626801 kubelet[2796]: E1029 00:42:22.626745 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.627034 kubelet[2796]: E1029 00:42:22.627015 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.627034 kubelet[2796]: W1029 00:42:22.627031 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.627102 kubelet[2796]: E1029 00:42:22.627043 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.627291 kubelet[2796]: E1029 00:42:22.627268 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.627291 kubelet[2796]: W1029 00:42:22.627283 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.627361 kubelet[2796]: E1029 00:42:22.627296 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.627544 kubelet[2796]: E1029 00:42:22.627529 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.627544 kubelet[2796]: W1029 00:42:22.627539 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.627586 kubelet[2796]: E1029 00:42:22.627549 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.627825 kubelet[2796]: E1029 00:42:22.627805 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.627825 kubelet[2796]: W1029 00:42:22.627820 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.627880 kubelet[2796]: E1029 00:42:22.627830 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.628057 kubelet[2796]: E1029 00:42:22.628040 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.628057 kubelet[2796]: W1029 00:42:22.628052 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.628123 kubelet[2796]: E1029 00:42:22.628061 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.628245 kubelet[2796]: E1029 00:42:22.628225 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.628245 kubelet[2796]: W1029 00:42:22.628235 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.628245 kubelet[2796]: E1029 00:42:22.628243 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.628417 kubelet[2796]: E1029 00:42:22.628403 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.628417 kubelet[2796]: W1029 00:42:22.628413 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.628467 kubelet[2796]: E1029 00:42:22.628421 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.628625 kubelet[2796]: E1029 00:42:22.628609 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.628625 kubelet[2796]: W1029 00:42:22.628619 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.628675 kubelet[2796]: E1029 00:42:22.628627 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.629105 kubelet[2796]: E1029 00:42:22.629088 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:42:22.629105 kubelet[2796]: W1029 00:42:22.629099 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:42:22.629105 kubelet[2796]: E1029 00:42:22.629109 2796 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:42:22.937787 containerd[1632]: time="2025-10-29T00:42:22.937739173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:42:22.938498 containerd[1632]: time="2025-10-29T00:42:22.938470470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Oct 29 00:42:22.939619 containerd[1632]: time="2025-10-29T00:42:22.939581400Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:42:22.941631 containerd[1632]: time="2025-10-29T00:42:22.941576215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:42:22.942116 containerd[1632]: time="2025-10-29T00:42:22.942084441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.303098369s"
Oct 29 00:42:22.942116 containerd[1632]: time="2025-10-29T00:42:22.942112243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Oct 29 00:42:22.945883 containerd[1632]: time="2025-10-29T00:42:22.945828868Z" level=info msg="CreateContainer within sandbox \"54f28b6baaf8b6f2dbdd59d300787360db26219933e42bf9c6487dcdfb0ca45e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 29 00:42:22.955682 containerd[1632]: time="2025-10-29T00:42:22.955616450Z" level=info msg="Container ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9: CDI devices from CRI Config.CDIDevices: []"
Oct 29 00:42:22.964498 containerd[1632]: time="2025-10-29T00:42:22.964435066Z" level=info msg="CreateContainer within sandbox \"54f28b6baaf8b6f2dbdd59d300787360db26219933e42bf9c6487dcdfb0ca45e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9\""
Oct 29 00:42:22.965210 containerd[1632]: time="2025-10-29T00:42:22.965180389Z" level=info msg="StartContainer for \"ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9\""
Oct 29 00:42:22.967648 containerd[1632]: time="2025-10-29T00:42:22.967612878Z" level=info msg="connecting to shim ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9" address="unix:///run/containerd/s/a1358ea371e21275f9685dcc11920f16ec6aeb36cea47f2700b706da4e49de3b" protocol=ttrpc version=3
Oct 29 00:42:22.993115 systemd[1]: Started cri-containerd-ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9.scope - libcontainer container ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9.
Oct 29 00:42:23.041523 containerd[1632]: time="2025-10-29T00:42:23.041474684Z" level=info msg="StartContainer for \"ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9\" returns successfully"
Oct 29 00:42:23.053197 systemd[1]: cri-containerd-ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9.scope: Deactivated successfully.
Oct 29 00:42:23.056062 containerd[1632]: time="2025-10-29T00:42:23.055983324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9\" id:\"ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9\" pid:3520 exited_at:{seconds:1761698543 nanos:55524070}"
Oct 29 00:42:23.056138 containerd[1632]: time="2025-10-29T00:42:23.055985298Z" level=info msg="received exit event container_id:\"ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9\" id:\"ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9\" pid:3520 exited_at:{seconds:1761698543 nanos:55524070}"
Oct 29 00:42:23.080806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea643f92de2044d2a909eaa3838a1629e16e0f446beb5511ca4effb2270aecf9-rootfs.mount: Deactivated successfully.
Oct 29 00:42:23.448486 kubelet[2796]: E1029 00:42:23.448401 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfhx9" podUID="06790988-73f1-4592-ba5d-833c8bb13f59"
Oct 29 00:42:23.532209 kubelet[2796]: I1029 00:42:23.532166 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 29 00:42:23.532718 kubelet[2796]: E1029 00:42:23.532542 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:23.532718 kubelet[2796]: E1029 00:42:23.532662 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:23.548696 kubelet[2796]: I1029 00:42:23.548615 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67d58876dd-zztgd" podStartSLOduration=3.483315287 podStartE2EDuration="5.548591332s" podCreationTimestamp="2025-10-29 00:42:18 +0000 UTC" firstStartedPulling="2025-10-29 00:42:19.573535439 +0000 UTC m=+26.227983400" lastFinishedPulling="2025-10-29 00:42:21.638811484 +0000 UTC m=+28.293259445" observedRunningTime="2025-10-29 00:42:22.538541885 +0000 UTC m=+29.192989856" watchObservedRunningTime="2025-10-29 00:42:23.548591332 +0000 UTC m=+30.203039293"
Oct 29 00:42:24.536922 kubelet[2796]: E1029 00:42:24.536843 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:24.538606 containerd[1632]: time="2025-10-29T00:42:24.538549198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
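The recurring dns.go:153 "Nameserver limits exceeded" entries above reflect the glibc resolv.conf limit of three `nameserver` entries (MAXNS=3): kubelet applies the first three resolvers it finds (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and warns that the rest were dropped. A rough sketch of that truncation, with a hypothetical fourth resolver (8.8.4.4) added for illustration (the helper is illustrative, not kubelet's code):

```go
package main

import "fmt"

// maxNameservers mirrors glibc's MAXNS limit of 3 resolv.conf
// "nameserver" entries; the resolver ignores anything beyond it,
// so kubelet truncates the list and logs a warning.
const maxNameservers = 3

// truncateNameservers keeps the first maxNameservers entries and
// reports whether any were dropped (illustrative helper).
func truncateNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Hypothetical host config with four upstream resolvers...
	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	kept, dropped := truncateNameservers(ns)
	// ...only the first three survive, matching the warning's
	// "applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8".
	fmt.Println(kept, dropped)
}
```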
Oct 29 00:42:25.448156 kubelet[2796]: E1029 00:42:25.448070 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfhx9" podUID="06790988-73f1-4592-ba5d-833c8bb13f59"
Oct 29 00:42:27.448639 kubelet[2796]: E1029 00:42:27.448572 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfhx9" podUID="06790988-73f1-4592-ba5d-833c8bb13f59"
Oct 29 00:42:27.604113 containerd[1632]: time="2025-10-29T00:42:27.603980952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:42:27.606064 containerd[1632]: time="2025-10-29T00:42:27.606021509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Oct 29 00:42:27.607687 containerd[1632]: time="2025-10-29T00:42:27.607641505Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:42:27.610752 containerd[1632]: time="2025-10-29T00:42:27.610692972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:42:27.611500 containerd[1632]: time="2025-10-29T00:42:27.611471295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.072879798s"
Oct 29 00:42:27.611557 containerd[1632]: time="2025-10-29T00:42:27.611500280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Oct 29 00:42:27.616174 containerd[1632]: time="2025-10-29T00:42:27.616121228Z" level=info msg="CreateContainer within sandbox \"54f28b6baaf8b6f2dbdd59d300787360db26219933e42bf9c6487dcdfb0ca45e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 29 00:42:27.626031 containerd[1632]: time="2025-10-29T00:42:27.625961821Z" level=info msg="Container 8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a: CDI devices from CRI Config.CDIDevices: []"
Oct 29 00:42:27.637451 containerd[1632]: time="2025-10-29T00:42:27.637377846Z" level=info msg="CreateContainer within sandbox \"54f28b6baaf8b6f2dbdd59d300787360db26219933e42bf9c6487dcdfb0ca45e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a\""
Oct 29 00:42:27.639385 containerd[1632]: time="2025-10-29T00:42:27.637931707Z" level=info msg="StartContainer for \"8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a\""
Oct 29 00:42:27.641108 containerd[1632]: time="2025-10-29T00:42:27.641062814Z" level=info msg="connecting to shim 8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a" address="unix:///run/containerd/s/a1358ea371e21275f9685dcc11920f16ec6aeb36cea47f2700b706da4e49de3b" protocol=ttrpc version=3
Oct 29 00:42:27.671200 systemd[1]: Started cri-containerd-8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a.scope - libcontainer container 8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a.
Oct 29 00:42:28.223438 containerd[1632]: time="2025-10-29T00:42:28.223366126Z" level=info msg="StartContainer for \"8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a\" returns successfully"
Oct 29 00:42:28.548364 kubelet[2796]: E1029 00:42:28.548236 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:29.448599 kubelet[2796]: E1029 00:42:29.448505 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfhx9" podUID="06790988-73f1-4592-ba5d-833c8bb13f59"
Oct 29 00:42:29.548307 kubelet[2796]: E1029 00:42:29.548249 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:29.824194 systemd[1]: cri-containerd-8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a.scope: Deactivated successfully.
Oct 29 00:42:29.824614 systemd[1]: cri-containerd-8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a.scope: Consumed 661ms CPU time, 180M memory peak, 3.4M read from disk, 171.3M written to disk.
Oct 29 00:42:29.844415 containerd[1632]: time="2025-10-29T00:42:29.844333378Z" level=info msg="received exit event container_id:\"8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a\" id:\"8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a\" pid:3577 exited_at:{seconds:1761698549 nanos:826078706}"
Oct 29 00:42:29.848038 containerd[1632]: time="2025-10-29T00:42:29.847934457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a\" id:\"8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a\" pid:3577 exited_at:{seconds:1761698549 nanos:826078706}"
Oct 29 00:42:29.875617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c24c378a49def7d2e9f6737a2bea6f2b79f69e7190db11a55df324f65e3079a-rootfs.mount: Deactivated successfully.
Oct 29 00:42:29.896367 kubelet[2796]: I1029 00:42:29.896330 2796 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Oct 29 00:42:30.093480 systemd[1]: Created slice kubepods-burstable-pod5905455c_a441_499c_8f77_8f1bcb5b5830.slice - libcontainer container kubepods-burstable-pod5905455c_a441_499c_8f77_8f1bcb5b5830.slice.
Oct 29 00:42:30.106034 systemd[1]: Created slice kubepods-besteffort-pode6e3d24d_0964_48c5_ab21_4abb2f93d132.slice - libcontainer container kubepods-besteffort-pode6e3d24d_0964_48c5_ab21_4abb2f93d132.slice.
Oct 29 00:42:30.113874 systemd[1]: Created slice kubepods-burstable-podd50e446a_d926_4232_912c_aaf27bd789fe.slice - libcontainer container kubepods-burstable-podd50e446a_d926_4232_912c_aaf27bd789fe.slice.
Oct 29 00:42:30.123927 systemd[1]: Created slice kubepods-besteffort-pod63def325_7646_4955_b342_50757e8ccbe9.slice - libcontainer container kubepods-besteffort-pod63def325_7646_4955_b342_50757e8ccbe9.slice.
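The "Created slice" entries above show the naming scheme visible throughout this log: each pod gets a cgroup slice named from its QoS class and its UID with dashes replaced by underscores. A small sketch of that mapping (the helper is illustrative, not kubelet's cgroup-manager code):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceNameForPod reproduces the naming seen in the log entries above:
// the pod UID has its dashes replaced with underscores and is embedded
// in a "kubepods-<qos>-pod<uid>.slice" name (illustrative helper).
func sliceNameForPod(qosClass, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID of the coredns-674b8bbfcf-5qxng pod from the log.
	fmt.Println(sliceNameForPod("burstable", "5905455c-a441-499c-8f77-8f1bcb5b5830"))
	// kubepods-burstable-pod5905455c_a441_499c_8f77_8f1bcb5b5830.slice
}
```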
Oct 29 00:42:30.134697 systemd[1]: Created slice kubepods-besteffort-pod0d9ba357_e9fe_4334_aa42_2c44f212b5ae.slice - libcontainer container kubepods-besteffort-pod0d9ba357_e9fe_4334_aa42_2c44f212b5ae.slice.
Oct 29 00:42:30.142871 systemd[1]: Created slice kubepods-besteffort-pod67dad18a_63e2_479c_bc13_d9830637f19e.slice - libcontainer container kubepods-besteffort-pod67dad18a_63e2_479c_bc13_d9830637f19e.slice.
Oct 29 00:42:30.150326 systemd[1]: Created slice kubepods-besteffort-pod94b96309_8719_4f92_83c6_e3ea53662334.slice - libcontainer container kubepods-besteffort-pod94b96309_8719_4f92_83c6_e3ea53662334.slice.
Oct 29 00:42:30.157076 systemd[1]: Created slice kubepods-besteffort-pod71738b5b_00d0_40b6_ac2e_dcbe7140012d.slice - libcontainer container kubepods-besteffort-pod71738b5b_00d0_40b6_ac2e_dcbe7140012d.slice.
Oct 29 00:42:30.175966 kubelet[2796]: I1029 00:42:30.175894 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63def325-7646-4955-b342-50757e8ccbe9-tigera-ca-bundle\") pod \"calico-kube-controllers-cf97f5b86-fqx7t\" (UID: \"63def325-7646-4955-b342-50757e8ccbe9\") " pod="calico-system/calico-kube-controllers-cf97f5b86-fqx7t"
Oct 29 00:42:30.175966 kubelet[2796]: I1029 00:42:30.175947 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfgbn\" (UniqueName: \"kubernetes.io/projected/63def325-7646-4955-b342-50757e8ccbe9-kube-api-access-dfgbn\") pod \"calico-kube-controllers-cf97f5b86-fqx7t\" (UID: \"63def325-7646-4955-b342-50757e8ccbe9\") " pod="calico-system/calico-kube-controllers-cf97f5b86-fqx7t"
Oct 29 00:42:30.175966 kubelet[2796]: I1029 00:42:30.175969 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbql9\" (UniqueName: \"kubernetes.io/projected/67dad18a-63e2-479c-bc13-d9830637f19e-kube-api-access-zbql9\") pod \"calico-apiserver-579cf9b788-8b2jb\" (UID: \"67dad18a-63e2-479c-bc13-d9830637f19e\") " pod="calico-apiserver/calico-apiserver-579cf9b788-8b2jb"
Oct 29 00:42:30.176211 kubelet[2796]: I1029 00:42:30.176005 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71738b5b-00d0-40b6-ac2e-dcbe7140012d-whisker-ca-bundle\") pod \"whisker-6b6ff97c4-g4jj4\" (UID: \"71738b5b-00d0-40b6-ac2e-dcbe7140012d\") " pod="calico-system/whisker-6b6ff97c4-g4jj4"
Oct 29 00:42:30.176211 kubelet[2796]: I1029 00:42:30.176023 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb89g\" (UniqueName: \"kubernetes.io/projected/71738b5b-00d0-40b6-ac2e-dcbe7140012d-kube-api-access-qb89g\") pod \"whisker-6b6ff97c4-g4jj4\" (UID: \"71738b5b-00d0-40b6-ac2e-dcbe7140012d\") " pod="calico-system/whisker-6b6ff97c4-g4jj4"
Oct 29 00:42:30.176211 kubelet[2796]: I1029 00:42:30.176041 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csmsr\" (UniqueName: \"kubernetes.io/projected/e6e3d24d-0964-48c5-ab21-4abb2f93d132-kube-api-access-csmsr\") pod \"calico-apiserver-6555bc8b57-6t6f2\" (UID: \"e6e3d24d-0964-48c5-ab21-4abb2f93d132\") " pod="calico-apiserver/calico-apiserver-6555bc8b57-6t6f2"
Oct 29 00:42:30.176211 kubelet[2796]: I1029 00:42:30.176057 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d9ba357-e9fe-4334-aa42-2c44f212b5ae-calico-apiserver-certs\") pod \"calico-apiserver-579cf9b788-p778f\" (UID: \"0d9ba357-e9fe-4334-aa42-2c44f212b5ae\") " pod="calico-apiserver/calico-apiserver-579cf9b788-p778f"
Oct 29 00:42:30.176211 kubelet[2796]: I1029 00:42:30.176073 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrp5g\" (UniqueName: \"kubernetes.io/projected/0d9ba357-e9fe-4334-aa42-2c44f212b5ae-kube-api-access-jrp5g\") pod \"calico-apiserver-579cf9b788-p778f\" (UID: \"0d9ba357-e9fe-4334-aa42-2c44f212b5ae\") " pod="calico-apiserver/calico-apiserver-579cf9b788-p778f"
Oct 29 00:42:30.176334 kubelet[2796]: I1029 00:42:30.176094 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e6e3d24d-0964-48c5-ab21-4abb2f93d132-calico-apiserver-certs\") pod \"calico-apiserver-6555bc8b57-6t6f2\" (UID: \"e6e3d24d-0964-48c5-ab21-4abb2f93d132\") " pod="calico-apiserver/calico-apiserver-6555bc8b57-6t6f2"
Oct 29 00:42:30.176334 kubelet[2796]: I1029 00:42:30.176108 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/67dad18a-63e2-479c-bc13-d9830637f19e-calico-apiserver-certs\") pod \"calico-apiserver-579cf9b788-8b2jb\" (UID: \"67dad18a-63e2-479c-bc13-d9830637f19e\") " pod="calico-apiserver/calico-apiserver-579cf9b788-8b2jb"
Oct 29 00:42:30.176334 kubelet[2796]: I1029 00:42:30.176126 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b96309-8719-4f92-83c6-e3ea53662334-config\") pod \"goldmane-666569f655-jvtk4\" (UID: \"94b96309-8719-4f92-83c6-e3ea53662334\") " pod="calico-system/goldmane-666569f655-jvtk4"
Oct 29 00:42:30.176334 kubelet[2796]: I1029 00:42:30.176144 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d50e446a-d926-4232-912c-aaf27bd789fe-config-volume\") pod \"coredns-674b8bbfcf-n9tnk\" (UID: \"d50e446a-d926-4232-912c-aaf27bd789fe\") " pod="kube-system/coredns-674b8bbfcf-n9tnk"
Oct 29 00:42:30.176334 kubelet[2796]: I1029 00:42:30.176157 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvzfh\" (UniqueName: \"kubernetes.io/projected/d50e446a-d926-4232-912c-aaf27bd789fe-kube-api-access-vvzfh\") pod \"coredns-674b8bbfcf-n9tnk\" (UID: \"d50e446a-d926-4232-912c-aaf27bd789fe\") " pod="kube-system/coredns-674b8bbfcf-n9tnk"
Oct 29 00:42:30.176512 kubelet[2796]: I1029 00:42:30.176193 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmvrw\" (UniqueName: \"kubernetes.io/projected/5905455c-a441-499c-8f77-8f1bcb5b5830-kube-api-access-xmvrw\") pod \"coredns-674b8bbfcf-5qxng\" (UID: \"5905455c-a441-499c-8f77-8f1bcb5b5830\") " pod="kube-system/coredns-674b8bbfcf-5qxng"
Oct 29 00:42:30.176512 kubelet[2796]: I1029 00:42:30.176208 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/94b96309-8719-4f92-83c6-e3ea53662334-goldmane-key-pair\") pod \"goldmane-666569f655-jvtk4\" (UID: \"94b96309-8719-4f92-83c6-e3ea53662334\") " pod="calico-system/goldmane-666569f655-jvtk4"
Oct 29 00:42:30.176512 kubelet[2796]: I1029 00:42:30.176224 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5905455c-a441-499c-8f77-8f1bcb5b5830-config-volume\") pod \"coredns-674b8bbfcf-5qxng\" (UID: \"5905455c-a441-499c-8f77-8f1bcb5b5830\") " pod="kube-system/coredns-674b8bbfcf-5qxng"
Oct 29 00:42:30.176512 kubelet[2796]: I1029 00:42:30.176238 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94b96309-8719-4f92-83c6-e3ea53662334-goldmane-ca-bundle\") pod \"goldmane-666569f655-jvtk4\" (UID: \"94b96309-8719-4f92-83c6-e3ea53662334\") " pod="calico-system/goldmane-666569f655-jvtk4"
Oct 29 00:42:30.176512 kubelet[2796]: I1029 00:42:30.176255 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xghbm\" (UniqueName: \"kubernetes.io/projected/94b96309-8719-4f92-83c6-e3ea53662334-kube-api-access-xghbm\") pod \"goldmane-666569f655-jvtk4\" (UID: \"94b96309-8719-4f92-83c6-e3ea53662334\") " pod="calico-system/goldmane-666569f655-jvtk4"
Oct 29 00:42:30.176623 kubelet[2796]: I1029 00:42:30.176269 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71738b5b-00d0-40b6-ac2e-dcbe7140012d-whisker-backend-key-pair\") pod \"whisker-6b6ff97c4-g4jj4\" (UID: \"71738b5b-00d0-40b6-ac2e-dcbe7140012d\") " pod="calico-system/whisker-6b6ff97c4-g4jj4"
Oct 29 00:42:30.400420 kubelet[2796]: E1029 00:42:30.399681 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:30.400569 containerd[1632]: time="2025-10-29T00:42:30.400407776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5qxng,Uid:5905455c-a441-499c-8f77-8f1bcb5b5830,Namespace:kube-system,Attempt:0,}"
Oct 29 00:42:30.412025 containerd[1632]: time="2025-10-29T00:42:30.410884998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6555bc8b57-6t6f2,Uid:e6e3d24d-0964-48c5-ab21-4abb2f93d132,Namespace:calico-apiserver,Attempt:0,}"
Oct 29 00:42:30.418216 kubelet[2796]: E1029 00:42:30.418178 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:30.418580 containerd[1632]: time="2025-10-29T00:42:30.418545404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n9tnk,Uid:d50e446a-d926-4232-912c-aaf27bd789fe,Namespace:kube-system,Attempt:0,}"
Oct 29 00:42:30.437356 containerd[1632]: time="2025-10-29T00:42:30.437283630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf97f5b86-fqx7t,Uid:63def325-7646-4955-b342-50757e8ccbe9,Namespace:calico-system,Attempt:0,}"
Oct 29 00:42:30.440958 containerd[1632]: time="2025-10-29T00:42:30.440912059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579cf9b788-p778f,Uid:0d9ba357-e9fe-4334-aa42-2c44f212b5ae,Namespace:calico-apiserver,Attempt:0,}"
Oct 29 00:42:30.447883 containerd[1632]: time="2025-10-29T00:42:30.447847273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579cf9b788-8b2jb,Uid:67dad18a-63e2-479c-bc13-d9830637f19e,Namespace:calico-apiserver,Attempt:0,}"
Oct 29 00:42:30.455025 containerd[1632]: time="2025-10-29T00:42:30.454835687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jvtk4,Uid:94b96309-8719-4f92-83c6-e3ea53662334,Namespace:calico-system,Attempt:0,}"
Oct 29 00:42:30.466404 containerd[1632]: time="2025-10-29T00:42:30.466355697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b6ff97c4-g4jj4,Uid:71738b5b-00d0-40b6-ac2e-dcbe7140012d,Namespace:calico-system,Attempt:0,}"
Oct 29 00:42:30.557667 kubelet[2796]: E1029 00:42:30.557625 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:30.577011 containerd[1632]: time="2025-10-29T00:42:30.576721093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Oct 29 00:42:30.594198 containerd[1632]: time="2025-10-29T00:42:30.594148676Z" level=error msg="Failed to destroy network for sandbox \"fc19ab91e0443b056ed04821514f03158159b8afb748f00eabcc82b3fa730366\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 29 00:42:30.600164 containerd[1632]: time="2025-10-29T00:42:30.600115450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n9tnk,Uid:d50e446a-d926-4232-912c-aaf27bd789fe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc19ab91e0443b056ed04821514f03158159b8afb748f00eabcc82b3fa730366\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 29 00:42:30.604200 kubelet[2796]: E1029 00:42:30.600617 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc19ab91e0443b056ed04821514f03158159b8afb748f00eabcc82b3fa730366\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 29 00:42:30.604200 kubelet[2796]: E1029 00:42:30.600720 2796 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc19ab91e0443b056ed04821514f03158159b8afb748f00eabcc82b3fa730366\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n9tnk"
Oct 29 00:42:30.604200 kubelet[2796]: E1029 00:42:30.600747 2796 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc19ab91e0443b056ed04821514f03158159b8afb748f00eabcc82b3fa730366\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename:
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n9tnk" Oct 29 00:42:30.604343 kubelet[2796]: E1029 00:42:30.600808 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n9tnk_kube-system(d50e446a-d926-4232-912c-aaf27bd789fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n9tnk_kube-system(d50e446a-d926-4232-912c-aaf27bd789fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc19ab91e0443b056ed04821514f03158159b8afb748f00eabcc82b3fa730366\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n9tnk" podUID="d50e446a-d926-4232-912c-aaf27bd789fe" Oct 29 00:42:30.606867 containerd[1632]: time="2025-10-29T00:42:30.606830911Z" level=error msg="Failed to destroy network for sandbox \"57771c6886fd3d179d446760d4860b233ce48d7c4431504aea374179817909a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.608933 containerd[1632]: time="2025-10-29T00:42:30.608896023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5qxng,Uid:5905455c-a441-499c-8f77-8f1bcb5b5830,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57771c6886fd3d179d446760d4860b233ce48d7c4431504aea374179817909a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.609152 kubelet[2796]: E1029 00:42:30.609113 2796 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57771c6886fd3d179d446760d4860b233ce48d7c4431504aea374179817909a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.609199 kubelet[2796]: E1029 00:42:30.609171 2796 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57771c6886fd3d179d446760d4860b233ce48d7c4431504aea374179817909a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5qxng" Oct 29 00:42:30.609199 kubelet[2796]: E1029 00:42:30.609193 2796 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57771c6886fd3d179d446760d4860b233ce48d7c4431504aea374179817909a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5qxng" Oct 29 00:42:30.609268 kubelet[2796]: E1029 00:42:30.609240 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5qxng_kube-system(5905455c-a441-499c-8f77-8f1bcb5b5830)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5qxng_kube-system(5905455c-a441-499c-8f77-8f1bcb5b5830)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57771c6886fd3d179d446760d4860b233ce48d7c4431504aea374179817909a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5qxng" podUID="5905455c-a441-499c-8f77-8f1bcb5b5830" Oct 29 00:42:30.618552 containerd[1632]: time="2025-10-29T00:42:30.618494201Z" level=error msg="Failed to destroy network for sandbox \"a67d826c419dde97d3545fab70d427da5232bca1980f8de3e12e11f7471ed31f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.622312 containerd[1632]: time="2025-10-29T00:42:30.622272892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6555bc8b57-6t6f2,Uid:e6e3d24d-0964-48c5-ab21-4abb2f93d132,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a67d826c419dde97d3545fab70d427da5232bca1980f8de3e12e11f7471ed31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.622600 kubelet[2796]: E1029 00:42:30.622550 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a67d826c419dde97d3545fab70d427da5232bca1980f8de3e12e11f7471ed31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.622676 kubelet[2796]: E1029 00:42:30.622626 2796 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a67d826c419dde97d3545fab70d427da5232bca1980f8de3e12e11f7471ed31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-6555bc8b57-6t6f2" Oct 29 00:42:30.622676 kubelet[2796]: E1029 00:42:30.622650 2796 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a67d826c419dde97d3545fab70d427da5232bca1980f8de3e12e11f7471ed31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6555bc8b57-6t6f2" Oct 29 00:42:30.623230 kubelet[2796]: E1029 00:42:30.622821 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6555bc8b57-6t6f2_calico-apiserver(e6e3d24d-0964-48c5-ab21-4abb2f93d132)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6555bc8b57-6t6f2_calico-apiserver(e6e3d24d-0964-48c5-ab21-4abb2f93d132)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a67d826c419dde97d3545fab70d427da5232bca1980f8de3e12e11f7471ed31f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6555bc8b57-6t6f2" podUID="e6e3d24d-0964-48c5-ab21-4abb2f93d132" Oct 29 00:42:30.649528 containerd[1632]: time="2025-10-29T00:42:30.649478322Z" level=error msg="Failed to destroy network for sandbox \"be34c658e4c292a842588f34a41e4ca2f5757e848edcc9fa924c932d65033ebb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.653548 containerd[1632]: time="2025-10-29T00:42:30.653260520Z" level=error msg="Failed to destroy network for sandbox 
\"448e7421d5c6272d9821f5dbf8090b06876332af4773be1f0aaf70da897e09b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.654167 containerd[1632]: time="2025-10-29T00:42:30.653610797Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b6ff97c4-g4jj4,Uid:71738b5b-00d0-40b6-ac2e-dcbe7140012d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be34c658e4c292a842588f34a41e4ca2f5757e848edcc9fa924c932d65033ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.654335 kubelet[2796]: E1029 00:42:30.653984 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be34c658e4c292a842588f34a41e4ca2f5757e848edcc9fa924c932d65033ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.654335 kubelet[2796]: E1029 00:42:30.654136 2796 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be34c658e4c292a842588f34a41e4ca2f5757e848edcc9fa924c932d65033ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b6ff97c4-g4jj4" Oct 29 00:42:30.654335 kubelet[2796]: E1029 00:42:30.654174 2796 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"be34c658e4c292a842588f34a41e4ca2f5757e848edcc9fa924c932d65033ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b6ff97c4-g4jj4" Oct 29 00:42:30.654487 kubelet[2796]: E1029 00:42:30.654223 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6b6ff97c4-g4jj4_calico-system(71738b5b-00d0-40b6-ac2e-dcbe7140012d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6b6ff97c4-g4jj4_calico-system(71738b5b-00d0-40b6-ac2e-dcbe7140012d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be34c658e4c292a842588f34a41e4ca2f5757e848edcc9fa924c932d65033ebb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b6ff97c4-g4jj4" podUID="71738b5b-00d0-40b6-ac2e-dcbe7140012d" Oct 29 00:42:30.655193 containerd[1632]: time="2025-10-29T00:42:30.655150381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jvtk4,Uid:94b96309-8719-4f92-83c6-e3ea53662334,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"448e7421d5c6272d9821f5dbf8090b06876332af4773be1f0aaf70da897e09b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.656050 kubelet[2796]: E1029 00:42:30.655402 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"448e7421d5c6272d9821f5dbf8090b06876332af4773be1f0aaf70da897e09b2\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.656050 kubelet[2796]: E1029 00:42:30.655470 2796 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"448e7421d5c6272d9821f5dbf8090b06876332af4773be1f0aaf70da897e09b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jvtk4" Oct 29 00:42:30.656050 kubelet[2796]: E1029 00:42:30.655485 2796 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"448e7421d5c6272d9821f5dbf8090b06876332af4773be1f0aaf70da897e09b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jvtk4" Oct 29 00:42:30.656177 kubelet[2796]: E1029 00:42:30.655562 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-jvtk4_calico-system(94b96309-8719-4f92-83c6-e3ea53662334)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-jvtk4_calico-system(94b96309-8719-4f92-83c6-e3ea53662334)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"448e7421d5c6272d9821f5dbf8090b06876332af4773be1f0aaf70da897e09b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-jvtk4" podUID="94b96309-8719-4f92-83c6-e3ea53662334" Oct 29 00:42:30.678085 containerd[1632]: 
time="2025-10-29T00:42:30.678020973Z" level=error msg="Failed to destroy network for sandbox \"da3c0409d7afed85a4567c3a68cfa6b89c6a78033b7ad65673003499e69ccd08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.679611 containerd[1632]: time="2025-10-29T00:42:30.679541751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579cf9b788-p778f,Uid:0d9ba357-e9fe-4334-aa42-2c44f212b5ae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da3c0409d7afed85a4567c3a68cfa6b89c6a78033b7ad65673003499e69ccd08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.680094 kubelet[2796]: E1029 00:42:30.680047 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da3c0409d7afed85a4567c3a68cfa6b89c6a78033b7ad65673003499e69ccd08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.680371 kubelet[2796]: E1029 00:42:30.680343 2796 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da3c0409d7afed85a4567c3a68cfa6b89c6a78033b7ad65673003499e69ccd08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-579cf9b788-p778f" Oct 29 00:42:30.680464 kubelet[2796]: E1029 00:42:30.680450 2796 kuberuntime_manager.go:1252] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da3c0409d7afed85a4567c3a68cfa6b89c6a78033b7ad65673003499e69ccd08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-579cf9b788-p778f" Oct 29 00:42:30.680688 kubelet[2796]: E1029 00:42:30.680631 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-579cf9b788-p778f_calico-apiserver(0d9ba357-e9fe-4334-aa42-2c44f212b5ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-579cf9b788-p778f_calico-apiserver(0d9ba357-e9fe-4334-aa42-2c44f212b5ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da3c0409d7afed85a4567c3a68cfa6b89c6a78033b7ad65673003499e69ccd08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-579cf9b788-p778f" podUID="0d9ba357-e9fe-4334-aa42-2c44f212b5ae" Oct 29 00:42:30.686051 containerd[1632]: time="2025-10-29T00:42:30.685981193Z" level=error msg="Failed to destroy network for sandbox \"c785396dcc28ecdbf0018960aaa365c8fcb687c1b01b0a341f24e65ce29beafe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.687336 containerd[1632]: time="2025-10-29T00:42:30.687228107Z" level=error msg="Failed to destroy network for sandbox \"e20becd43fb39269a2155f7c308806c2117f4a42c451d518d1cb289ae74cce9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 29 00:42:30.687506 containerd[1632]: time="2025-10-29T00:42:30.687475441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579cf9b788-8b2jb,Uid:67dad18a-63e2-479c-bc13-d9830637f19e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c785396dcc28ecdbf0018960aaa365c8fcb687c1b01b0a341f24e65ce29beafe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.687798 kubelet[2796]: E1029 00:42:30.687750 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c785396dcc28ecdbf0018960aaa365c8fcb687c1b01b0a341f24e65ce29beafe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.687885 kubelet[2796]: E1029 00:42:30.687835 2796 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c785396dcc28ecdbf0018960aaa365c8fcb687c1b01b0a341f24e65ce29beafe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-579cf9b788-8b2jb" Oct 29 00:42:30.687885 kubelet[2796]: E1029 00:42:30.687858 2796 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c785396dcc28ecdbf0018960aaa365c8fcb687c1b01b0a341f24e65ce29beafe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-579cf9b788-8b2jb" Oct 29 00:42:30.687972 kubelet[2796]: E1029 00:42:30.687916 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-579cf9b788-8b2jb_calico-apiserver(67dad18a-63e2-479c-bc13-d9830637f19e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-579cf9b788-8b2jb_calico-apiserver(67dad18a-63e2-479c-bc13-d9830637f19e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c785396dcc28ecdbf0018960aaa365c8fcb687c1b01b0a341f24e65ce29beafe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-579cf9b788-8b2jb" podUID="67dad18a-63e2-479c-bc13-d9830637f19e" Oct 29 00:42:30.688388 containerd[1632]: time="2025-10-29T00:42:30.688357600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf97f5b86-fqx7t,Uid:63def325-7646-4955-b342-50757e8ccbe9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e20becd43fb39269a2155f7c308806c2117f4a42c451d518d1cb289ae74cce9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.688635 kubelet[2796]: E1029 00:42:30.688596 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e20becd43fb39269a2155f7c308806c2117f4a42c451d518d1cb289ae74cce9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:30.688749 kubelet[2796]: E1029 
00:42:30.688734 2796 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e20becd43fb39269a2155f7c308806c2117f4a42c451d518d1cb289ae74cce9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cf97f5b86-fqx7t" Oct 29 00:42:30.688845 kubelet[2796]: E1029 00:42:30.688819 2796 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e20becd43fb39269a2155f7c308806c2117f4a42c451d518d1cb289ae74cce9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cf97f5b86-fqx7t" Oct 29 00:42:30.688929 kubelet[2796]: E1029 00:42:30.688887 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cf97f5b86-fqx7t_calico-system(63def325-7646-4955-b342-50757e8ccbe9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cf97f5b86-fqx7t_calico-system(63def325-7646-4955-b342-50757e8ccbe9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e20becd43fb39269a2155f7c308806c2117f4a42c451d518d1cb289ae74cce9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cf97f5b86-fqx7t" podUID="63def325-7646-4955-b342-50757e8ccbe9" Oct 29 00:42:31.454732 systemd[1]: Created slice kubepods-besteffort-pod06790988_73f1_4592_ba5d_833c8bb13f59.slice - libcontainer container 
kubepods-besteffort-pod06790988_73f1_4592_ba5d_833c8bb13f59.slice. Oct 29 00:42:31.457186 containerd[1632]: time="2025-10-29T00:42:31.457149536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfhx9,Uid:06790988-73f1-4592-ba5d-833c8bb13f59,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:31.512258 containerd[1632]: time="2025-10-29T00:42:31.512203087Z" level=error msg="Failed to destroy network for sandbox \"91c8783c4ca5b73a8a127d55136d2d3d5f4569645a8b3d5e1a13e993ab62049d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:31.513887 containerd[1632]: time="2025-10-29T00:42:31.513765213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfhx9,Uid:06790988-73f1-4592-ba5d-833c8bb13f59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91c8783c4ca5b73a8a127d55136d2d3d5f4569645a8b3d5e1a13e993ab62049d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:31.514172 kubelet[2796]: E1029 00:42:31.514120 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91c8783c4ca5b73a8a127d55136d2d3d5f4569645a8b3d5e1a13e993ab62049d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:31.514535 kubelet[2796]: E1029 00:42:31.514205 2796 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91c8783c4ca5b73a8a127d55136d2d3d5f4569645a8b3d5e1a13e993ab62049d\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dfhx9" Oct 29 00:42:31.514535 kubelet[2796]: E1029 00:42:31.514239 2796 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91c8783c4ca5b73a8a127d55136d2d3d5f4569645a8b3d5e1a13e993ab62049d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dfhx9" Oct 29 00:42:31.514535 kubelet[2796]: E1029 00:42:31.514309 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dfhx9_calico-system(06790988-73f1-4592-ba5d-833c8bb13f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dfhx9_calico-system(06790988-73f1-4592-ba5d-833c8bb13f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91c8783c4ca5b73a8a127d55136d2d3d5f4569645a8b3d5e1a13e993ab62049d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dfhx9" podUID="06790988-73f1-4592-ba5d-833c8bb13f59" Oct 29 00:42:31.515161 systemd[1]: run-netns-cni\x2d4fa1fd64\x2d2206\x2d6442\x2d1940\x2d2e43e6283be1.mount: Deactivated successfully. Oct 29 00:42:40.222597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992827343.mount: Deactivated successfully. Oct 29 00:42:41.072330 systemd[1]: Started sshd@7-10.0.0.77:22-10.0.0.1:54216.service - OpenSSH per-connection server daemon (10.0.0.1:54216). 
Oct 29 00:42:41.150056 containerd[1632]: time="2025-10-29T00:42:41.149579929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675"
Oct 29 00:42:41.157772 containerd[1632]: time="2025-10-29T00:42:41.157151644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:42:41.160088 containerd[1632]: time="2025-10-29T00:42:41.159053825Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:42:41.160484 containerd[1632]: time="2025-10-29T00:42:41.159827347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.583064073s"
Oct 29 00:42:41.160535 containerd[1632]: time="2025-10-29T00:42:41.160477949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Oct 29 00:42:41.160977 containerd[1632]: time="2025-10-29T00:42:41.160944734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:42:41.185142 containerd[1632]: time="2025-10-29T00:42:41.185086867Z" level=info msg="CreateContainer within sandbox \"54f28b6baaf8b6f2dbdd59d300787360db26219933e42bf9c6487dcdfb0ca45e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Oct 29 00:42:41.190811 sshd[3929]: Accepted publickey for core from 10.0.0.1 port 54216 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:42:41.192489 sshd-session[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:42:41.197298 containerd[1632]: time="2025-10-29T00:42:41.197257325Z" level=info msg="Container 969a8899f9df5d803af8a3b060eadac64d8c2068eb3bedf0348f51d12260d096: CDI devices from CRI Config.CDIDevices: []"
Oct 29 00:42:41.202644 systemd-logind[1616]: New session 8 of user core.
Oct 29 00:42:41.209949 containerd[1632]: time="2025-10-29T00:42:41.209890442Z" level=info msg="CreateContainer within sandbox \"54f28b6baaf8b6f2dbdd59d300787360db26219933e42bf9c6487dcdfb0ca45e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"969a8899f9df5d803af8a3b060eadac64d8c2068eb3bedf0348f51d12260d096\""
Oct 29 00:42:41.210413 containerd[1632]: time="2025-10-29T00:42:41.210390170Z" level=info msg="StartContainer for \"969a8899f9df5d803af8a3b060eadac64d8c2068eb3bedf0348f51d12260d096\""
Oct 29 00:42:41.212152 containerd[1632]: time="2025-10-29T00:42:41.212121361Z" level=info msg="connecting to shim 969a8899f9df5d803af8a3b060eadac64d8c2068eb3bedf0348f51d12260d096" address="unix:///run/containerd/s/a1358ea371e21275f9685dcc11920f16ec6aeb36cea47f2700b706da4e49de3b" protocol=ttrpc version=3
Oct 29 00:42:41.218236 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 29 00:42:41.244227 systemd[1]: Started cri-containerd-969a8899f9df5d803af8a3b060eadac64d8c2068eb3bedf0348f51d12260d096.scope - libcontainer container 969a8899f9df5d803af8a3b060eadac64d8c2068eb3bedf0348f51d12260d096.
Oct 29 00:42:41.312700 containerd[1632]: time="2025-10-29T00:42:41.312600382Z" level=info msg="StartContainer for \"969a8899f9df5d803af8a3b060eadac64d8c2068eb3bedf0348f51d12260d096\" returns successfully"
Oct 29 00:42:41.405986 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Oct 29 00:42:41.406254 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Oct 29 00:42:41.416972 sshd[3946]: Connection closed by 10.0.0.1 port 54216
Oct 29 00:42:41.416537 sshd-session[3929]: pam_unix(sshd:session): session closed for user core
Oct 29 00:42:41.423678 systemd[1]: sshd@7-10.0.0.77:22-10.0.0.1:54216.service: Deactivated successfully.
Oct 29 00:42:41.426888 systemd[1]: session-8.scope: Deactivated successfully.
Oct 29 00:42:41.427912 systemd-logind[1616]: Session 8 logged out. Waiting for processes to exit.
Oct 29 00:42:41.430235 systemd-logind[1616]: Removed session 8.
Oct 29 00:42:41.448493 kubelet[2796]: E1029 00:42:41.448455 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:41.449587 containerd[1632]: time="2025-10-29T00:42:41.449160461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5qxng,Uid:5905455c-a441-499c-8f77-8f1bcb5b5830,Namespace:kube-system,Attempt:0,}"
Oct 29 00:42:41.516323 containerd[1632]: time="2025-10-29T00:42:41.515707681Z" level=error msg="Failed to destroy network for sandbox \"a19a8672a0a299efc330426cf16cf03cc3dfb9dc6cebd4196ae54c92576af5fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 29 00:42:41.521194 systemd[1]: run-netns-cni\x2d9bdcf26e\x2d9030\x2df343\x2dd8ec\x2ddb97cb709d40.mount: Deactivated successfully.
Oct 29 00:42:41.522626 containerd[1632]: time="2025-10-29T00:42:41.522536861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5qxng,Uid:5905455c-a441-499c-8f77-8f1bcb5b5830,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19a8672a0a299efc330426cf16cf03cc3dfb9dc6cebd4196ae54c92576af5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 29 00:42:41.524531 kubelet[2796]: E1029 00:42:41.524473 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19a8672a0a299efc330426cf16cf03cc3dfb9dc6cebd4196ae54c92576af5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 29 00:42:41.524655 kubelet[2796]: E1029 00:42:41.524611 2796 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19a8672a0a299efc330426cf16cf03cc3dfb9dc6cebd4196ae54c92576af5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5qxng"
Oct 29 00:42:41.524704 kubelet[2796]: E1029 00:42:41.524679 2796 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a19a8672a0a299efc330426cf16cf03cc3dfb9dc6cebd4196ae54c92576af5fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5qxng"
Oct 29 00:42:41.524817 kubelet[2796]: E1029 00:42:41.524767 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5qxng_kube-system(5905455c-a441-499c-8f77-8f1bcb5b5830)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5qxng_kube-system(5905455c-a441-499c-8f77-8f1bcb5b5830)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a19a8672a0a299efc330426cf16cf03cc3dfb9dc6cebd4196ae54c92576af5fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5qxng" podUID="5905455c-a441-499c-8f77-8f1bcb5b5830"
Oct 29 00:42:41.592344 kubelet[2796]: E1029 00:42:41.592243 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:41.658489 kubelet[2796]: I1029 00:42:41.658118 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71738b5b-00d0-40b6-ac2e-dcbe7140012d-whisker-backend-key-pair\") pod \"71738b5b-00d0-40b6-ac2e-dcbe7140012d\" (UID: \"71738b5b-00d0-40b6-ac2e-dcbe7140012d\") "
Oct 29 00:42:41.658489 kubelet[2796]: I1029 00:42:41.658197 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb89g\" (UniqueName: \"kubernetes.io/projected/71738b5b-00d0-40b6-ac2e-dcbe7140012d-kube-api-access-qb89g\") pod \"71738b5b-00d0-40b6-ac2e-dcbe7140012d\" (UID: \"71738b5b-00d0-40b6-ac2e-dcbe7140012d\") "
Oct 29 00:42:41.658489 kubelet[2796]: I1029 00:42:41.658247 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71738b5b-00d0-40b6-ac2e-dcbe7140012d-whisker-ca-bundle\") pod \"71738b5b-00d0-40b6-ac2e-dcbe7140012d\" (UID: \"71738b5b-00d0-40b6-ac2e-dcbe7140012d\") "
Oct 29 00:42:41.668524 kubelet[2796]: I1029 00:42:41.668380 2796 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71738b5b-00d0-40b6-ac2e-dcbe7140012d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "71738b5b-00d0-40b6-ac2e-dcbe7140012d" (UID: "71738b5b-00d0-40b6-ac2e-dcbe7140012d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Oct 29 00:42:41.672753 systemd[1]: var-lib-kubelet-pods-71738b5b\x2d00d0\x2d40b6\x2dac2e\x2ddcbe7140012d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqb89g.mount: Deactivated successfully.
Oct 29 00:42:41.672945 kubelet[2796]: I1029 00:42:41.672922 2796 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71738b5b-00d0-40b6-ac2e-dcbe7140012d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "71738b5b-00d0-40b6-ac2e-dcbe7140012d" (UID: "71738b5b-00d0-40b6-ac2e-dcbe7140012d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Oct 29 00:42:41.673727 systemd[1]: var-lib-kubelet-pods-71738b5b\x2d00d0\x2d40b6\x2dac2e\x2ddcbe7140012d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Oct 29 00:42:41.675280 kubelet[2796]: I1029 00:42:41.675243 2796 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71738b5b-00d0-40b6-ac2e-dcbe7140012d-kube-api-access-qb89g" (OuterVolumeSpecName: "kube-api-access-qb89g") pod "71738b5b-00d0-40b6-ac2e-dcbe7140012d" (UID: "71738b5b-00d0-40b6-ac2e-dcbe7140012d"). InnerVolumeSpecName "kube-api-access-qb89g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Oct 29 00:42:41.756675 containerd[1632]: time="2025-10-29T00:42:41.756610965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"969a8899f9df5d803af8a3b060eadac64d8c2068eb3bedf0348f51d12260d096\" id:\"60f1a2eb7ba5152bea0205e1dadf1d948f47a96bbf4f690200185e4aee6a9c67\" pid:4045 exit_status:1 exited_at:{seconds:1761698561 nanos:756241903}"
Oct 29 00:42:41.759480 kubelet[2796]: I1029 00:42:41.759432 2796 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71738b5b-00d0-40b6-ac2e-dcbe7140012d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Oct 29 00:42:41.759480 kubelet[2796]: I1029 00:42:41.759469 2796 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qb89g\" (UniqueName: \"kubernetes.io/projected/71738b5b-00d0-40b6-ac2e-dcbe7140012d-kube-api-access-qb89g\") on node \"localhost\" DevicePath \"\""
Oct 29 00:42:41.759480 kubelet[2796]: I1029 00:42:41.759478 2796 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71738b5b-00d0-40b6-ac2e-dcbe7140012d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Oct 29 00:42:42.591849 kubelet[2796]: E1029 00:42:42.591379 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:42.598444 systemd[1]: Removed slice kubepods-besteffort-pod71738b5b_00d0_40b6_ac2e_dcbe7140012d.slice - libcontainer container kubepods-besteffort-pod71738b5b_00d0_40b6_ac2e_dcbe7140012d.slice.
Oct 29 00:42:42.614027 kubelet[2796]: I1029 00:42:42.612440 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vlhss" podStartSLOduration=3.247153616 podStartE2EDuration="24.612419303s" podCreationTimestamp="2025-10-29 00:42:18 +0000 UTC" firstStartedPulling="2025-10-29 00:42:19.797386684 +0000 UTC m=+26.451834645" lastFinishedPulling="2025-10-29 00:42:41.162652371 +0000 UTC m=+47.817100332" observedRunningTime="2025-10-29 00:42:41.628471055 +0000 UTC m=+48.282919026" watchObservedRunningTime="2025-10-29 00:42:42.612419303 +0000 UTC m=+49.266867264"
Oct 29 00:42:42.671788 systemd[1]: Created slice kubepods-besteffort-podec6228cd_f4f8_4d8b_9e13_5218fd64e5d0.slice - libcontainer container kubepods-besteffort-podec6228cd_f4f8_4d8b_9e13_5218fd64e5d0.slice.
Oct 29 00:42:42.720629 containerd[1632]: time="2025-10-29T00:42:42.720579451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"969a8899f9df5d803af8a3b060eadac64d8c2068eb3bedf0348f51d12260d096\" id:\"1e61a1ad432c2126f7ff1ad459e93f9d45fc087c540b87d942ed40f3a8018e80\" pid:4085 exit_status:1 exited_at:{seconds:1761698562 nanos:720189389}"
Oct 29 00:42:42.766218 kubelet[2796]: I1029 00:42:42.766163 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0-whisker-backend-key-pair\") pod \"whisker-76ccf55dd8-7n2c9\" (UID: \"ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0\") " pod="calico-system/whisker-76ccf55dd8-7n2c9"
Oct 29 00:42:42.766218 kubelet[2796]: I1029 00:42:42.766205 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxwlk\" (UniqueName: \"kubernetes.io/projected/ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0-kube-api-access-nxwlk\") pod \"whisker-76ccf55dd8-7n2c9\" (UID: \"ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0\") " pod="calico-system/whisker-76ccf55dd8-7n2c9"
Oct 29 00:42:42.766218 kubelet[2796]: I1029 00:42:42.766225 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0-whisker-ca-bundle\") pod \"whisker-76ccf55dd8-7n2c9\" (UID: \"ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0\") " pod="calico-system/whisker-76ccf55dd8-7n2c9"
Oct 29 00:42:42.980032 containerd[1632]: time="2025-10-29T00:42:42.979892425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76ccf55dd8-7n2c9,Uid:ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0,Namespace:calico-system,Attempt:0,}"
Oct 29 00:42:43.150530 systemd-networkd[1540]: cali05b78fad6f6: Link UP
Oct 29 00:42:43.151225 systemd-networkd[1540]: cali05b78fad6f6: Gained carrier
Oct 29 00:42:43.167154 containerd[1632]: 2025-10-29 00:42:43.008 [INFO][4199] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Oct 29 00:42:43.167154 containerd[1632]: 2025-10-29 00:42:43.034 [INFO][4199] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0 whisker-76ccf55dd8- calico-system ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0 984 0 2025-10-29 00:42:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76ccf55dd8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76ccf55dd8-7n2c9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali05b78fad6f6 [] [] }} ContainerID="6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" Namespace="calico-system" Pod="whisker-76ccf55dd8-7n2c9" WorkloadEndpoint="localhost-k8s-whisker--76ccf55dd8--7n2c9-"
Oct 29 00:42:43.167154 containerd[1632]: 2025-10-29 00:42:43.035 [INFO][4199] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" Namespace="calico-system" Pod="whisker-76ccf55dd8-7n2c9" WorkloadEndpoint="localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0"
Oct 29 00:42:43.167154 containerd[1632]: 2025-10-29 00:42:43.104 [INFO][4213] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" HandleID="k8s-pod-network.6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" Workload="localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0"
Oct 29 00:42:43.167420 containerd[1632]: 2025-10-29 00:42:43.105 [INFO][4213] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" HandleID="k8s-pod-network.6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" Workload="localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123710), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76ccf55dd8-7n2c9", "timestamp":"2025-10-29 00:42:43.104516096 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 29 00:42:43.167420 containerd[1632]: 2025-10-29 00:42:43.105 [INFO][4213] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Oct 29 00:42:43.167420 containerd[1632]: 2025-10-29 00:42:43.105 [INFO][4213] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Oct 29 00:42:43.167420 containerd[1632]: 2025-10-29 00:42:43.106 [INFO][4213] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 29 00:42:43.167420 containerd[1632]: 2025-10-29 00:42:43.114 [INFO][4213] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" host="localhost"
Oct 29 00:42:43.167420 containerd[1632]: 2025-10-29 00:42:43.119 [INFO][4213] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Oct 29 00:42:43.167420 containerd[1632]: 2025-10-29 00:42:43.123 [INFO][4213] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 29 00:42:43.167420 containerd[1632]: 2025-10-29 00:42:43.124 [INFO][4213] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 29 00:42:43.167420 containerd[1632]: 2025-10-29 00:42:43.126 [INFO][4213] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 29 00:42:43.167420 containerd[1632]: 2025-10-29 00:42:43.126 [INFO][4213] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" host="localhost"
Oct 29 00:42:43.167632 containerd[1632]: 2025-10-29 00:42:43.127 [INFO][4213] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1
Oct 29 00:42:43.167632 containerd[1632]: 2025-10-29 00:42:43.132 [INFO][4213] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" host="localhost"
Oct 29 00:42:43.167632 containerd[1632]: 2025-10-29 00:42:43.137 [INFO][4213] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" host="localhost"
Oct 29 00:42:43.167632 containerd[1632]: 2025-10-29 00:42:43.137 [INFO][4213] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" host="localhost"
Oct 29 00:42:43.167632 containerd[1632]: 2025-10-29 00:42:43.137 [INFO][4213] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Oct 29 00:42:43.167632 containerd[1632]: 2025-10-29 00:42:43.137 [INFO][4213] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" HandleID="k8s-pod-network.6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" Workload="localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0"
Oct 29 00:42:43.167754 containerd[1632]: 2025-10-29 00:42:43.140 [INFO][4199] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" Namespace="calico-system" Pod="whisker-76ccf55dd8-7n2c9" WorkloadEndpoint="localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0", GenerateName:"whisker-76ccf55dd8-", Namespace:"calico-system", SelfLink:"", UID:"ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76ccf55dd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76ccf55dd8-7n2c9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali05b78fad6f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 29 00:42:43.167754 containerd[1632]: 2025-10-29 00:42:43.140 [INFO][4199] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" Namespace="calico-system" Pod="whisker-76ccf55dd8-7n2c9" WorkloadEndpoint="localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0"
Oct 29 00:42:43.167823 containerd[1632]: 2025-10-29 00:42:43.141 [INFO][4199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05b78fad6f6 ContainerID="6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" Namespace="calico-system" Pod="whisker-76ccf55dd8-7n2c9" WorkloadEndpoint="localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0"
Oct 29 00:42:43.167823 containerd[1632]: 2025-10-29 00:42:43.151 [INFO][4199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" Namespace="calico-system" Pod="whisker-76ccf55dd8-7n2c9" WorkloadEndpoint="localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0"
Oct 29 00:42:43.167866 containerd[1632]: 2025-10-29 00:42:43.152 [INFO][4199] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" Namespace="calico-system" Pod="whisker-76ccf55dd8-7n2c9" WorkloadEndpoint="localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0", GenerateName:"whisker-76ccf55dd8-", Namespace:"calico-system", SelfLink:"", UID:"ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76ccf55dd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1", Pod:"whisker-76ccf55dd8-7n2c9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali05b78fad6f6", MAC:"e2:c7:a8:11:eb:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 29 00:42:43.167917 containerd[1632]: 2025-10-29 00:42:43.162 [INFO][4199] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" Namespace="calico-system" Pod="whisker-76ccf55dd8-7n2c9" WorkloadEndpoint="localhost-k8s-whisker--76ccf55dd8--7n2c9-eth0"
Oct 29 00:42:43.311596 containerd[1632]: time="2025-10-29T00:42:43.311446193Z" level=info msg="connecting to shim 6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1" address="unix:///run/containerd/s/b8a4d95fda367d1ecf678d852e290033c134129da7639710c92204730372cc71" namespace=k8s.io protocol=ttrpc version=3
Oct 29 00:42:43.404148 systemd[1]: Started cri-containerd-6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1.scope - libcontainer container 6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1.
Oct 29 00:42:43.416326 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 29 00:42:43.449577 containerd[1632]: time="2025-10-29T00:42:43.449497317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6555bc8b57-6t6f2,Uid:e6e3d24d-0964-48c5-ab21-4abb2f93d132,Namespace:calico-apiserver,Attempt:0,}"
Oct 29 00:42:43.449874 containerd[1632]: time="2025-10-29T00:42:43.449730304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jvtk4,Uid:94b96309-8719-4f92-83c6-e3ea53662334,Namespace:calico-system,Attempt:0,}"
Oct 29 00:42:43.452061 kubelet[2796]: I1029 00:42:43.452034 2796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71738b5b-00d0-40b6-ac2e-dcbe7140012d" path="/var/lib/kubelet/pods/71738b5b-00d0-40b6-ac2e-dcbe7140012d/volumes"
Oct 29 00:42:43.511683 containerd[1632]: time="2025-10-29T00:42:43.511624736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76ccf55dd8-7n2c9,Uid:ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"6758fbb5cfcdee77900d66f0469ba70170316487aa95d897982b3e4e5eb176a1\""
Oct 29 00:42:43.519323 containerd[1632]: time="2025-10-29T00:42:43.519205806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 29 00:42:43.612659 systemd-networkd[1540]: caliae43be3c084: Link UP
Oct 29 00:42:43.614244 systemd-networkd[1540]: caliae43be3c084: Gained carrier
Oct 29 00:42:43.654443 containerd[1632]: 2025-10-29 00:42:43.510 [INFO][4273] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Oct 29 00:42:43.654443 containerd[1632]: 2025-10-29 00:42:43.534 [INFO][4273] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0 calico-apiserver-6555bc8b57- calico-apiserver e6e3d24d-0964-48c5-ab21-4abb2f93d132 867 0 2025-10-29 00:42:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6555bc8b57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6555bc8b57-6t6f2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae43be3c084 [] [] }} ContainerID="56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" Namespace="calico-apiserver" Pod="calico-apiserver-6555bc8b57-6t6f2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-"
Oct 29 00:42:43.654443 containerd[1632]: 2025-10-29 00:42:43.534 [INFO][4273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" Namespace="calico-apiserver" Pod="calico-apiserver-6555bc8b57-6t6f2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0"
Oct 29 00:42:43.654443 containerd[1632]: 2025-10-29 00:42:43.568 [INFO][4302] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" HandleID="k8s-pod-network.56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" Workload="localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0"
Oct 29 00:42:43.654703 containerd[1632]: 2025-10-29 00:42:43.568 [INFO][4302] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" HandleID="k8s-pod-network.56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" Workload="localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6555bc8b57-6t6f2", "timestamp":"2025-10-29 00:42:43.568114563 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 29 00:42:43.654703 containerd[1632]: 2025-10-29 00:42:43.568 [INFO][4302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Oct 29 00:42:43.654703 containerd[1632]: 2025-10-29 00:42:43.568 [INFO][4302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Oct 29 00:42:43.654703 containerd[1632]: 2025-10-29 00:42:43.568 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 29 00:42:43.654703 containerd[1632]: 2025-10-29 00:42:43.574 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" host="localhost"
Oct 29 00:42:43.654703 containerd[1632]: 2025-10-29 00:42:43.581 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Oct 29 00:42:43.654703 containerd[1632]: 2025-10-29 00:42:43.584 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 29 00:42:43.654703 containerd[1632]: 2025-10-29 00:42:43.586 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 29 00:42:43.654703 containerd[1632]: 2025-10-29 00:42:43.588 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 29 00:42:43.654703 containerd[1632]: 2025-10-29 00:42:43.588 [INFO][4302] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" host="localhost"
Oct 29 00:42:43.654923 containerd[1632]: 2025-10-29 00:42:43.589 [INFO][4302] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb
Oct 29 00:42:43.654923 containerd[1632]: 2025-10-29 00:42:43.593 [INFO][4302] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" host="localhost"
Oct 29 00:42:43.654923 containerd[1632]: 2025-10-29 00:42:43.603 [INFO][4302] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" host="localhost"
Oct 29 00:42:43.654923 containerd[1632]: 2025-10-29 00:42:43.603 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" host="localhost"
Oct 29 00:42:43.654923 containerd[1632]: 2025-10-29 00:42:43.603 [INFO][4302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Oct 29 00:42:43.654923 containerd[1632]: 2025-10-29 00:42:43.603 [INFO][4302] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" HandleID="k8s-pod-network.56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" Workload="localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0"
Oct 29 00:42:43.655135 containerd[1632]: 2025-10-29 00:42:43.607 [INFO][4273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" Namespace="calico-apiserver" Pod="calico-apiserver-6555bc8b57-6t6f2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0", GenerateName:"calico-apiserver-6555bc8b57-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6e3d24d-0964-48c5-ab21-4abb2f93d132", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6555bc8b57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6555bc8b57-6t6f2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae43be3c084", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 29 00:42:43.655186 containerd[1632]: 2025-10-29 00:42:43.607 [INFO][4273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" Namespace="calico-apiserver" Pod="calico-apiserver-6555bc8b57-6t6f2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0"
Oct 29 00:42:43.655186 containerd[1632]: 2025-10-29 00:42:43.607 [INFO][4273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae43be3c084 ContainerID="56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" Namespace="calico-apiserver" Pod="calico-apiserver-6555bc8b57-6t6f2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0"
Oct 29 00:42:43.655186 containerd[1632]: 2025-10-29 00:42:43.613 [INFO][4273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" Namespace="calico-apiserver" Pod="calico-apiserver-6555bc8b57-6t6f2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0"
Oct 29 00:42:43.655256 containerd[1632]: 2025-10-29 00:42:43.616 [INFO][4273] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" Namespace="calico-apiserver" Pod="calico-apiserver-6555bc8b57-6t6f2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0", GenerateName:"calico-apiserver-6555bc8b57-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6e3d24d-0964-48c5-ab21-4abb2f93d132", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6555bc8b57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb", Pod:"calico-apiserver-6555bc8b57-6t6f2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae43be3c084", MAC:"c6:4c:f1:d6:de:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 29 00:42:43.655306 containerd[1632]: 2025-10-29 00:42:43.651 [INFO][4273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" Namespace="calico-apiserver" Pod="calico-apiserver-6555bc8b57-6t6f2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6555bc8b57--6t6f2-eth0"
Oct 29 00:42:43.711783 systemd-networkd[1540]: caliec18eb4de84: Link UP
Oct 29 00:42:43.712307 systemd-networkd[1540]: caliec18eb4de84: Gained carrier
Oct 29 00:42:43.727037 containerd[1632]: 2025-10-29 00:42:43.550 [INFO][4287] cni-plugin/utils.go 100: File
/var/lib/calico/mtu does not exist Oct 29 00:42:43.727037 containerd[1632]: 2025-10-29 00:42:43.563 [INFO][4287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--jvtk4-eth0 goldmane-666569f655- calico-system 94b96309-8719-4f92-83c6-e3ea53662334 871 0 2025-10-29 00:42:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-jvtk4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliec18eb4de84 [] [] }} ContainerID="0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" Namespace="calico-system" Pod="goldmane-666569f655-jvtk4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jvtk4-" Oct 29 00:42:43.727037 containerd[1632]: 2025-10-29 00:42:43.564 [INFO][4287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" Namespace="calico-system" Pod="goldmane-666569f655-jvtk4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jvtk4-eth0" Oct 29 00:42:43.727037 containerd[1632]: 2025-10-29 00:42:43.604 [INFO][4311] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" HandleID="k8s-pod-network.0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" Workload="localhost-k8s-goldmane--666569f655--jvtk4-eth0" Oct 29 00:42:43.727547 containerd[1632]: 2025-10-29 00:42:43.605 [INFO][4311] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" HandleID="k8s-pod-network.0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" Workload="localhost-k8s-goldmane--666569f655--jvtk4-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7170), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-jvtk4", "timestamp":"2025-10-29 00:42:43.604826353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:43.727547 containerd[1632]: 2025-10-29 00:42:43.605 [INFO][4311] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:43.727547 containerd[1632]: 2025-10-29 00:42:43.605 [INFO][4311] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:42:43.727547 containerd[1632]: 2025-10-29 00:42:43.605 [INFO][4311] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:43.727547 containerd[1632]: 2025-10-29 00:42:43.676 [INFO][4311] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" host="localhost" Oct 29 00:42:43.727547 containerd[1632]: 2025-10-29 00:42:43.681 [INFO][4311] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:43.727547 containerd[1632]: 2025-10-29 00:42:43.684 [INFO][4311] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:43.727547 containerd[1632]: 2025-10-29 00:42:43.686 [INFO][4311] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:43.727547 containerd[1632]: 2025-10-29 00:42:43.688 [INFO][4311] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:43.727547 containerd[1632]: 2025-10-29 00:42:43.688 [INFO][4311] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" host="localhost" Oct 29 00:42:43.727802 containerd[1632]: 2025-10-29 00:42:43.689 [INFO][4311] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d Oct 29 00:42:43.727802 containerd[1632]: 2025-10-29 00:42:43.701 [INFO][4311] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" host="localhost" Oct 29 00:42:43.727802 containerd[1632]: 2025-10-29 00:42:43.706 [INFO][4311] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" host="localhost" Oct 29 00:42:43.727802 containerd[1632]: 2025-10-29 00:42:43.706 [INFO][4311] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" host="localhost" Oct 29 00:42:43.727802 containerd[1632]: 2025-10-29 00:42:43.706 [INFO][4311] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:42:43.727802 containerd[1632]: 2025-10-29 00:42:43.706 [INFO][4311] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" HandleID="k8s-pod-network.0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" Workload="localhost-k8s-goldmane--666569f655--jvtk4-eth0" Oct 29 00:42:43.727914 containerd[1632]: 2025-10-29 00:42:43.709 [INFO][4287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" Namespace="calico-system" Pod="goldmane-666569f655-jvtk4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jvtk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--jvtk4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"94b96309-8719-4f92-83c6-e3ea53662334", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-jvtk4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliec18eb4de84", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:43.727914 containerd[1632]: 2025-10-29 00:42:43.710 [INFO][4287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" Namespace="calico-system" Pod="goldmane-666569f655-jvtk4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jvtk4-eth0" Oct 29 00:42:43.728063 containerd[1632]: 2025-10-29 00:42:43.710 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec18eb4de84 ContainerID="0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" Namespace="calico-system" Pod="goldmane-666569f655-jvtk4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jvtk4-eth0" Oct 29 00:42:43.728063 containerd[1632]: 2025-10-29 00:42:43.713 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" Namespace="calico-system" Pod="goldmane-666569f655-jvtk4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jvtk4-eth0" Oct 29 00:42:43.728111 containerd[1632]: 2025-10-29 00:42:43.713 [INFO][4287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" Namespace="calico-system" Pod="goldmane-666569f655-jvtk4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jvtk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--jvtk4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"94b96309-8719-4f92-83c6-e3ea53662334", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 16, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d", Pod:"goldmane-666569f655-jvtk4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliec18eb4de84", MAC:"32:f6:90:15:3c:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:43.728170 containerd[1632]: 2025-10-29 00:42:43.723 [INFO][4287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" Namespace="calico-system" Pod="goldmane-666569f655-jvtk4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jvtk4-eth0" Oct 29 00:42:43.734707 containerd[1632]: time="2025-10-29T00:42:43.734669892Z" level=info msg="connecting to shim 56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb" address="unix:///run/containerd/s/ac637d48723ec0a832b1612b8ff0057832bc1367ef2d0963c48378409929297a" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:43.755395 containerd[1632]: time="2025-10-29T00:42:43.754765993Z" level=info msg="connecting to shim 0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d" address="unix:///run/containerd/s/16ba9f2a926f23c1a76aa8879afaa9420729e1c1f1400e5a1cff386c13e8db53" namespace=k8s.io 
protocol=ttrpc version=3 Oct 29 00:42:43.764175 systemd[1]: Started cri-containerd-56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb.scope - libcontainer container 56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb. Oct 29 00:42:43.783159 systemd[1]: Started cri-containerd-0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d.scope - libcontainer container 0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d. Oct 29 00:42:43.787516 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:43.797957 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:43.823912 containerd[1632]: time="2025-10-29T00:42:43.823868825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6555bc8b57-6t6f2,Uid:e6e3d24d-0964-48c5-ab21-4abb2f93d132,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"56a2221ba786e1f2ede6391002d170d48e07965bb46581a007cb3c1d7117d5bb\"" Oct 29 00:42:43.834960 containerd[1632]: time="2025-10-29T00:42:43.834915462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jvtk4,Uid:94b96309-8719-4f92-83c6-e3ea53662334,Namespace:calico-system,Attempt:0,} returns sandbox id \"0624a2d0641cd7257f8b8d4192f46031071404566627fa5dded12ca3fe310e6d\"" Oct 29 00:42:43.880616 containerd[1632]: time="2025-10-29T00:42:43.880507762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:43.882232 containerd[1632]: time="2025-10-29T00:42:43.882155756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:42:43.882434 
containerd[1632]: time="2025-10-29T00:42:43.882249993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:42:43.882533 kubelet[2796]: E1029 00:42:43.882478 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:42:43.882852 kubelet[2796]: E1029 00:42:43.882554 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:42:43.883102 containerd[1632]: time="2025-10-29T00:42:43.883077365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:43.889346 kubelet[2796]: E1029 00:42:43.889277 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1760c0aab9cf4aae903ec89f085f66b1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nxwlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76ccf55dd8-7n2c9_calico-system(ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:44.237410 containerd[1632]: time="2025-10-29T00:42:44.237249963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:44.238706 
containerd[1632]: time="2025-10-29T00:42:44.238556385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:44.238706 containerd[1632]: time="2025-10-29T00:42:44.238598866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:44.238912 kubelet[2796]: E1029 00:42:44.238859 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:44.238968 kubelet[2796]: E1029 00:42:44.238914 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:44.239322 kubelet[2796]: E1029 00:42:44.239211 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csmsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6555bc8b57-6t6f2_calico-apiserver(e6e3d24d-0964-48c5-ab21-4abb2f93d132): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:44.239609 containerd[1632]: time="2025-10-29T00:42:44.239560711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:42:44.241434 kubelet[2796]: E1029 00:42:44.241072 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6555bc8b57-6t6f2" podUID="e6e3d24d-0964-48c5-ab21-4abb2f93d132" Oct 29 00:42:44.448568 containerd[1632]: time="2025-10-29T00:42:44.448497962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579cf9b788-p778f,Uid:0d9ba357-e9fe-4334-aa42-2c44f212b5ae,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:42:44.565265 containerd[1632]: time="2025-10-29T00:42:44.565114679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:44.566254 containerd[1632]: time="2025-10-29T00:42:44.566215676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:42:44.566332 containerd[1632]: time="2025-10-29T00:42:44.566288874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:44.567227 kubelet[2796]: E1029 00:42:44.567159 2796 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:42:44.567291 kubelet[2796]: E1029 00:42:44.567227 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:42:44.567667 kubelet[2796]: E1029 00:42:44.567583 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key
-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xghbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jvtk4_calico-system(94b96309-8719-4f92-83c6-e3ea53662334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:44.567875 containerd[1632]: time="2025-10-29T00:42:44.567779651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 
00:42:44.569018 kubelet[2796]: E1029 00:42:44.568969 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jvtk4" podUID="94b96309-8719-4f92-83c6-e3ea53662334" Oct 29 00:42:44.570897 systemd-networkd[1540]: cali05b78fad6f6: Gained IPv6LL Oct 29 00:42:44.601898 kubelet[2796]: E1029 00:42:44.601828 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6555bc8b57-6t6f2" podUID="e6e3d24d-0964-48c5-ab21-4abb2f93d132" Oct 29 00:42:44.602928 kubelet[2796]: E1029 00:42:44.602901 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jvtk4" podUID="94b96309-8719-4f92-83c6-e3ea53662334" Oct 29 00:42:44.673656 systemd-networkd[1540]: cali3541908e468: Link UP Oct 29 00:42:44.674588 systemd-networkd[1540]: cali3541908e468: Gained 
carrier Oct 29 00:42:44.688017 containerd[1632]: 2025-10-29 00:42:44.572 [INFO][4451] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:42:44.688017 containerd[1632]: 2025-10-29 00:42:44.589 [INFO][4451] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0 calico-apiserver-579cf9b788- calico-apiserver 0d9ba357-e9fe-4334-aa42-2c44f212b5ae 864 0 2025-10-29 00:42:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:579cf9b788 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-579cf9b788-p778f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3541908e468 [] [] }} ContainerID="c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-p778f" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--p778f-" Oct 29 00:42:44.688017 containerd[1632]: 2025-10-29 00:42:44.589 [INFO][4451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-p778f" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0" Oct 29 00:42:44.688017 containerd[1632]: 2025-10-29 00:42:44.632 [INFO][4466] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" HandleID="k8s-pod-network.c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" Workload="localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0" Oct 29 00:42:44.688272 containerd[1632]: 2025-10-29 00:42:44.633 [INFO][4466] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" HandleID="k8s-pod-network.c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" Workload="localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b06c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-579cf9b788-p778f", "timestamp":"2025-10-29 00:42:44.632772752 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:44.688272 containerd[1632]: 2025-10-29 00:42:44.633 [INFO][4466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:44.688272 containerd[1632]: 2025-10-29 00:42:44.633 [INFO][4466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:42:44.688272 containerd[1632]: 2025-10-29 00:42:44.633 [INFO][4466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:44.688272 containerd[1632]: 2025-10-29 00:42:44.642 [INFO][4466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" host="localhost" Oct 29 00:42:44.688272 containerd[1632]: 2025-10-29 00:42:44.647 [INFO][4466] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:44.688272 containerd[1632]: 2025-10-29 00:42:44.650 [INFO][4466] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:44.688272 containerd[1632]: 2025-10-29 00:42:44.652 [INFO][4466] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:44.688272 containerd[1632]: 2025-10-29 00:42:44.654 [INFO][4466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:44.688272 containerd[1632]: 2025-10-29 00:42:44.654 [INFO][4466] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" host="localhost" Oct 29 00:42:44.688483 containerd[1632]: 2025-10-29 00:42:44.655 [INFO][4466] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2 Oct 29 00:42:44.688483 containerd[1632]: 2025-10-29 00:42:44.663 [INFO][4466] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" host="localhost" Oct 29 00:42:44.688483 containerd[1632]: 2025-10-29 00:42:44.667 [INFO][4466] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" host="localhost" Oct 29 00:42:44.688483 containerd[1632]: 2025-10-29 00:42:44.667 [INFO][4466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" host="localhost" Oct 29 00:42:44.688483 containerd[1632]: 2025-10-29 00:42:44.668 [INFO][4466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:42:44.688483 containerd[1632]: 2025-10-29 00:42:44.668 [INFO][4466] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" HandleID="k8s-pod-network.c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" Workload="localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0" Oct 29 00:42:44.688604 containerd[1632]: 2025-10-29 00:42:44.671 [INFO][4451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-p778f" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0", GenerateName:"calico-apiserver-579cf9b788-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d9ba357-e9fe-4334-aa42-2c44f212b5ae", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579cf9b788", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-579cf9b788-p778f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3541908e468", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:44.688654 containerd[1632]: 2025-10-29 00:42:44.671 [INFO][4451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-p778f" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0" Oct 29 00:42:44.688654 containerd[1632]: 2025-10-29 00:42:44.671 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3541908e468 ContainerID="c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-p778f" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0" Oct 29 00:42:44.688654 containerd[1632]: 2025-10-29 00:42:44.675 [INFO][4451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-p778f" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0" Oct 29 00:42:44.688716 containerd[1632]: 2025-10-29 00:42:44.675 [INFO][4451] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-p778f" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0", GenerateName:"calico-apiserver-579cf9b788-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d9ba357-e9fe-4334-aa42-2c44f212b5ae", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579cf9b788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2", Pod:"calico-apiserver-579cf9b788-p778f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3541908e468", MAC:"12:28:3d:64:f9:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:44.688831 containerd[1632]: 2025-10-29 00:42:44.683 [INFO][4451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-p778f" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--p778f-eth0" Oct 29 00:42:44.887587 containerd[1632]: time="2025-10-29T00:42:44.887521051Z" level=info msg="connecting to shim c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2" address="unix:///run/containerd/s/096eb4e4ae47fd81a8b3248b3b5e5a2e47d4b36c2a5e8e7463459480ad1a43b7" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:44.900663 containerd[1632]: time="2025-10-29T00:42:44.900623884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:44.913163 systemd[1]: Started cri-containerd-c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2.scope - libcontainer container c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2. Oct 29 00:42:44.914409 containerd[1632]: time="2025-10-29T00:42:44.914203643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:42:44.914596 containerd[1632]: time="2025-10-29T00:42:44.914274606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:42:44.914910 kubelet[2796]: E1029 00:42:44.914856 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:42:44.915552 
kubelet[2796]: E1029 00:42:44.914925 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:42:44.915552 kubelet[2796]: E1029 00:42:44.915069 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxwlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*
10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76ccf55dd8-7n2c9_calico-system(ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:44.916373 kubelet[2796]: E1029 00:42:44.916216 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76ccf55dd8-7n2c9" podUID="ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0" Oct 29 00:42:44.930504 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:44.965389 containerd[1632]: time="2025-10-29T00:42:44.965310690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579cf9b788-p778f,Uid:0d9ba357-e9fe-4334-aa42-2c44f212b5ae,Namespace:calico-apiserver,Attempt:0,} returns 
sandbox id \"c182c79ee67fe88aec018a5b3fe16ce0751a6c80d53cc3e8dcca12ef7f2aa3a2\"" Oct 29 00:42:44.966878 containerd[1632]: time="2025-10-29T00:42:44.966853967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:45.018171 systemd-networkd[1540]: caliae43be3c084: Gained IPv6LL Oct 29 00:42:45.341889 containerd[1632]: time="2025-10-29T00:42:45.341836837Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:45.343353 containerd[1632]: time="2025-10-29T00:42:45.343293030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:45.343421 containerd[1632]: time="2025-10-29T00:42:45.343367099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:45.343566 kubelet[2796]: E1029 00:42:45.343516 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:45.343618 kubelet[2796]: E1029 00:42:45.343570 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:45.343760 kubelet[2796]: E1029 00:42:45.343717 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrp5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-579cf9b788-p778f_calico-apiserver(0d9ba357-e9fe-4334-aa42-2c44f212b5ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:45.344948 kubelet[2796]: E1029 00:42:45.344909 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579cf9b788-p778f" podUID="0d9ba357-e9fe-4334-aa42-2c44f212b5ae" Oct 29 00:42:45.448754 kubelet[2796]: E1029 00:42:45.448629 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:45.448986 containerd[1632]: time="2025-10-29T00:42:45.448759669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfhx9,Uid:06790988-73f1-4592-ba5d-833c8bb13f59,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:45.449163 containerd[1632]: time="2025-10-29T00:42:45.449125284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n9tnk,Uid:d50e446a-d926-4232-912c-aaf27bd789fe,Namespace:kube-system,Attempt:0,}" Oct 29 00:42:45.449402 containerd[1632]: time="2025-10-29T00:42:45.449367319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf97f5b86-fqx7t,Uid:63def325-7646-4955-b342-50757e8ccbe9,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:45.467032 systemd-networkd[1540]: caliec18eb4de84: Gained IPv6LL Oct 29 00:42:45.601368 systemd-networkd[1540]: calie57163ebcd6: Link UP Oct 29 00:42:45.602286 systemd-networkd[1540]: calie57163ebcd6: Gained carrier Oct 29 00:42:45.613075 kubelet[2796]: E1029 00:42:45.612964 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6555bc8b57-6t6f2" podUID="e6e3d24d-0964-48c5-ab21-4abb2f93d132" Oct 29 00:42:45.614191 kubelet[2796]: E1029 00:42:45.614116 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jvtk4" podUID="94b96309-8719-4f92-83c6-e3ea53662334" Oct 29 00:42:45.614977 kubelet[2796]: E1029 00:42:45.614948 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579cf9b788-p778f" podUID="0d9ba357-e9fe-4334-aa42-2c44f212b5ae" Oct 29 00:42:45.615501 kubelet[2796]: E1029 00:42:45.615354 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76ccf55dd8-7n2c9" podUID="ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0" Oct 29 00:42:45.620283 
containerd[1632]: 2025-10-29 00:42:45.492 [INFO][4555] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:42:45.620283 containerd[1632]: 2025-10-29 00:42:45.504 [INFO][4555] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dfhx9-eth0 csi-node-driver- calico-system 06790988-73f1-4592-ba5d-833c8bb13f59 758 0 2025-10-29 00:42:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dfhx9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie57163ebcd6 [] [] }} ContainerID="df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" Namespace="calico-system" Pod="csi-node-driver-dfhx9" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfhx9-" Oct 29 00:42:45.620283 containerd[1632]: 2025-10-29 00:42:45.504 [INFO][4555] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" Namespace="calico-system" Pod="csi-node-driver-dfhx9" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfhx9-eth0" Oct 29 00:42:45.620283 containerd[1632]: 2025-10-29 00:42:45.543 [INFO][4599] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" HandleID="k8s-pod-network.df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" Workload="localhost-k8s-csi--node--driver--dfhx9-eth0" Oct 29 00:42:45.621074 containerd[1632]: 2025-10-29 00:42:45.543 [INFO][4599] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" 
HandleID="k8s-pod-network.df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" Workload="localhost-k8s-csi--node--driver--dfhx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df000), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dfhx9", "timestamp":"2025-10-29 00:42:45.543320458 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:45.621074 containerd[1632]: 2025-10-29 00:42:45.544 [INFO][4599] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:45.621074 containerd[1632]: 2025-10-29 00:42:45.544 [INFO][4599] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:42:45.621074 containerd[1632]: 2025-10-29 00:42:45.544 [INFO][4599] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:45.621074 containerd[1632]: 2025-10-29 00:42:45.554 [INFO][4599] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" host="localhost" Oct 29 00:42:45.621074 containerd[1632]: 2025-10-29 00:42:45.559 [INFO][4599] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:45.621074 containerd[1632]: 2025-10-29 00:42:45.563 [INFO][4599] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:45.621074 containerd[1632]: 2025-10-29 00:42:45.565 [INFO][4599] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:45.621074 containerd[1632]: 2025-10-29 00:42:45.568 [INFO][4599] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:45.621074 containerd[1632]: 2025-10-29 00:42:45.568 
[INFO][4599] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" host="localhost" Oct 29 00:42:45.621315 containerd[1632]: 2025-10-29 00:42:45.569 [INFO][4599] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0 Oct 29 00:42:45.621315 containerd[1632]: 2025-10-29 00:42:45.586 [INFO][4599] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" host="localhost" Oct 29 00:42:45.621315 containerd[1632]: 2025-10-29 00:42:45.592 [INFO][4599] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" host="localhost" Oct 29 00:42:45.621315 containerd[1632]: 2025-10-29 00:42:45.592 [INFO][4599] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" host="localhost" Oct 29 00:42:45.621315 containerd[1632]: 2025-10-29 00:42:45.592 [INFO][4599] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:42:45.621315 containerd[1632]: 2025-10-29 00:42:45.592 [INFO][4599] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" HandleID="k8s-pod-network.df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" Workload="localhost-k8s-csi--node--driver--dfhx9-eth0" Oct 29 00:42:45.621460 containerd[1632]: 2025-10-29 00:42:45.597 [INFO][4555] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" Namespace="calico-system" Pod="csi-node-driver-dfhx9" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfhx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dfhx9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06790988-73f1-4592-ba5d-833c8bb13f59", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dfhx9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie57163ebcd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:45.621511 containerd[1632]: 2025-10-29 00:42:45.597 [INFO][4555] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" Namespace="calico-system" Pod="csi-node-driver-dfhx9" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfhx9-eth0" Oct 29 00:42:45.621511 containerd[1632]: 2025-10-29 00:42:45.597 [INFO][4555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie57163ebcd6 ContainerID="df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" Namespace="calico-system" Pod="csi-node-driver-dfhx9" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfhx9-eth0" Oct 29 00:42:45.621511 containerd[1632]: 2025-10-29 00:42:45.602 [INFO][4555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" Namespace="calico-system" Pod="csi-node-driver-dfhx9" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfhx9-eth0" Oct 29 00:42:45.621571 containerd[1632]: 2025-10-29 00:42:45.602 [INFO][4555] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" Namespace="calico-system" Pod="csi-node-driver-dfhx9" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfhx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dfhx9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06790988-73f1-4592-ba5d-833c8bb13f59", ResourceVersion:"758", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0", Pod:"csi-node-driver-dfhx9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie57163ebcd6", MAC:"16:07:f3:84:a1:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:45.621618 containerd[1632]: 2025-10-29 00:42:45.616 [INFO][4555] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" Namespace="calico-system" Pod="csi-node-driver-dfhx9" WorkloadEndpoint="localhost-k8s-csi--node--driver--dfhx9-eth0" Oct 29 00:42:45.650453 containerd[1632]: time="2025-10-29T00:42:45.650408669Z" level=info msg="connecting to shim df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0" address="unix:///run/containerd/s/e32ba2a16a458abef19633c8110af60cd7fe4b1950db96752d5878832fa5691d" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:45.658309 kubelet[2796]: I1029 00:42:45.658267 2796 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Oct 29 00:42:45.658671 kubelet[2796]: E1029 00:42:45.658649 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:45.699397 systemd[1]: Started cri-containerd-df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0.scope - libcontainer container df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0. Oct 29 00:42:45.709119 systemd-networkd[1540]: calia1bfd478811: Link UP Oct 29 00:42:45.710192 systemd-networkd[1540]: calia1bfd478811: Gained carrier Oct 29 00:42:45.719636 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:45.723413 containerd[1632]: 2025-10-29 00:42:45.494 [INFO][4562] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:42:45.723413 containerd[1632]: 2025-10-29 00:42:45.509 [INFO][4562] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0 calico-kube-controllers-cf97f5b86- calico-system 63def325-7646-4955-b342-50757e8ccbe9 869 0 2025-10-29 00:42:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cf97f5b86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-cf97f5b86-fqx7t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia1bfd478811 [] [] }} ContainerID="1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" Namespace="calico-system" Pod="calico-kube-controllers-cf97f5b86-fqx7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-" Oct 29 00:42:45.723413 containerd[1632]: 
2025-10-29 00:42:45.509 [INFO][4562] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" Namespace="calico-system" Pod="calico-kube-controllers-cf97f5b86-fqx7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0" Oct 29 00:42:45.723413 containerd[1632]: 2025-10-29 00:42:45.544 [INFO][4603] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" HandleID="k8s-pod-network.1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" Workload="localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0" Oct 29 00:42:45.723620 containerd[1632]: 2025-10-29 00:42:45.544 [INFO][4603] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" HandleID="k8s-pod-network.1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" Workload="localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000434120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-cf97f5b86-fqx7t", "timestamp":"2025-10-29 00:42:45.544195119 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:45.723620 containerd[1632]: 2025-10-29 00:42:45.544 [INFO][4603] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:45.723620 containerd[1632]: 2025-10-29 00:42:45.592 [INFO][4603] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:42:45.723620 containerd[1632]: 2025-10-29 00:42:45.592 [INFO][4603] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:45.723620 containerd[1632]: 2025-10-29 00:42:45.653 [INFO][4603] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" host="localhost" Oct 29 00:42:45.723620 containerd[1632]: 2025-10-29 00:42:45.664 [INFO][4603] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:45.723620 containerd[1632]: 2025-10-29 00:42:45.677 [INFO][4603] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:45.723620 containerd[1632]: 2025-10-29 00:42:45.681 [INFO][4603] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:45.723620 containerd[1632]: 2025-10-29 00:42:45.686 [INFO][4603] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:45.723620 containerd[1632]: 2025-10-29 00:42:45.686 [INFO][4603] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" host="localhost" Oct 29 00:42:45.723910 containerd[1632]: 2025-10-29 00:42:45.690 [INFO][4603] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc Oct 29 00:42:45.723910 containerd[1632]: 2025-10-29 00:42:45.694 [INFO][4603] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" host="localhost" Oct 29 00:42:45.723910 containerd[1632]: 2025-10-29 00:42:45.699 [INFO][4603] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" host="localhost" Oct 29 00:42:45.723910 containerd[1632]: 2025-10-29 00:42:45.699 [INFO][4603] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" host="localhost" Oct 29 00:42:45.723910 containerd[1632]: 2025-10-29 00:42:45.699 [INFO][4603] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:42:45.723910 containerd[1632]: 2025-10-29 00:42:45.699 [INFO][4603] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" HandleID="k8s-pod-network.1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" Workload="localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0" Oct 29 00:42:45.725815 containerd[1632]: 2025-10-29 00:42:45.706 [INFO][4562] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" Namespace="calico-system" Pod="calico-kube-controllers-cf97f5b86-fqx7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0", GenerateName:"calico-kube-controllers-cf97f5b86-", Namespace:"calico-system", SelfLink:"", UID:"63def325-7646-4955-b342-50757e8ccbe9", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cf97f5b86", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-cf97f5b86-fqx7t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1bfd478811", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:45.725882 containerd[1632]: 2025-10-29 00:42:45.707 [INFO][4562] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" Namespace="calico-system" Pod="calico-kube-controllers-cf97f5b86-fqx7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0" Oct 29 00:42:45.725882 containerd[1632]: 2025-10-29 00:42:45.707 [INFO][4562] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1bfd478811 ContainerID="1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" Namespace="calico-system" Pod="calico-kube-controllers-cf97f5b86-fqx7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0" Oct 29 00:42:45.725882 containerd[1632]: 2025-10-29 00:42:45.710 [INFO][4562] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" Namespace="calico-system" Pod="calico-kube-controllers-cf97f5b86-fqx7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0" Oct 29 00:42:45.725943 containerd[1632]: 2025-10-29 
00:42:45.710 [INFO][4562] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" Namespace="calico-system" Pod="calico-kube-controllers-cf97f5b86-fqx7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0", GenerateName:"calico-kube-controllers-cf97f5b86-", Namespace:"calico-system", SelfLink:"", UID:"63def325-7646-4955-b342-50757e8ccbe9", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cf97f5b86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc", Pod:"calico-kube-controllers-cf97f5b86-fqx7t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1bfd478811", MAC:"ca:f9:89:9f:11:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:45.726005 containerd[1632]: 2025-10-29 
00:42:45.720 [INFO][4562] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" Namespace="calico-system" Pod="calico-kube-controllers-cf97f5b86-fqx7t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf97f5b86--fqx7t-eth0" Oct 29 00:42:45.737199 containerd[1632]: time="2025-10-29T00:42:45.737158759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfhx9,Uid:06790988-73f1-4592-ba5d-833c8bb13f59,Namespace:calico-system,Attempt:0,} returns sandbox id \"df98c5cb3470a5e6992075cdf037ac8aed653e4638f725c02b4650dfba7cd2e0\"" Oct 29 00:42:45.738853 containerd[1632]: time="2025-10-29T00:42:45.738809296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:42:45.748896 containerd[1632]: time="2025-10-29T00:42:45.748820807Z" level=info msg="connecting to shim 1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc" address="unix:///run/containerd/s/d2cbe0cafe4c94ed8fe527a728255be51c94520a8673fcb7e65d7452b93ffd26" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:45.779232 systemd[1]: Started cri-containerd-1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc.scope - libcontainer container 1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc. 
Oct 29 00:42:45.802860 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:45.805587 systemd-networkd[1540]: cali52693900ad1: Link UP Oct 29 00:42:45.806119 systemd-networkd[1540]: cali52693900ad1: Gained carrier Oct 29 00:42:45.821251 containerd[1632]: 2025-10-29 00:42:45.509 [INFO][4572] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:42:45.821251 containerd[1632]: 2025-10-29 00:42:45.529 [INFO][4572] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0 coredns-674b8bbfcf- kube-system d50e446a-d926-4232-912c-aaf27bd789fe 873 0 2025-10-29 00:42:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-n9tnk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali52693900ad1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9tnk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n9tnk-" Oct 29 00:42:45.821251 containerd[1632]: 2025-10-29 00:42:45.529 [INFO][4572] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9tnk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0" Oct 29 00:42:45.821251 containerd[1632]: 2025-10-29 00:42:45.568 [INFO][4613] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" HandleID="k8s-pod-network.8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" 
Workload="localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0" Oct 29 00:42:45.821492 containerd[1632]: 2025-10-29 00:42:45.568 [INFO][4613] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" HandleID="k8s-pod-network.8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" Workload="localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034cfa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-n9tnk", "timestamp":"2025-10-29 00:42:45.568456485 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:45.821492 containerd[1632]: 2025-10-29 00:42:45.568 [INFO][4613] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:45.821492 containerd[1632]: 2025-10-29 00:42:45.699 [INFO][4613] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:42:45.821492 containerd[1632]: 2025-10-29 00:42:45.700 [INFO][4613] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:45.821492 containerd[1632]: 2025-10-29 00:42:45.752 [INFO][4613] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" host="localhost" Oct 29 00:42:45.821492 containerd[1632]: 2025-10-29 00:42:45.768 [INFO][4613] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:45.821492 containerd[1632]: 2025-10-29 00:42:45.774 [INFO][4613] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:45.821492 containerd[1632]: 2025-10-29 00:42:45.776 [INFO][4613] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:45.821492 containerd[1632]: 2025-10-29 00:42:45.778 [INFO][4613] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:45.821492 containerd[1632]: 2025-10-29 00:42:45.778 [INFO][4613] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" host="localhost" Oct 29 00:42:45.821682 containerd[1632]: 2025-10-29 00:42:45.780 [INFO][4613] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb Oct 29 00:42:45.821682 containerd[1632]: 2025-10-29 00:42:45.788 [INFO][4613] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" host="localhost" Oct 29 00:42:45.821682 containerd[1632]: 2025-10-29 00:42:45.799 [INFO][4613] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" host="localhost" Oct 29 00:42:45.821682 containerd[1632]: 2025-10-29 00:42:45.799 [INFO][4613] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" host="localhost" Oct 29 00:42:45.821682 containerd[1632]: 2025-10-29 00:42:45.799 [INFO][4613] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:42:45.821682 containerd[1632]: 2025-10-29 00:42:45.799 [INFO][4613] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" HandleID="k8s-pod-network.8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" Workload="localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0" Oct 29 00:42:45.821787 containerd[1632]: 2025-10-29 00:42:45.802 [INFO][4572] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9tnk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d50e446a-d926-4232-912c-aaf27bd789fe", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-n9tnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52693900ad1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:45.821848 containerd[1632]: 2025-10-29 00:42:45.803 [INFO][4572] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9tnk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0" Oct 29 00:42:45.821848 containerd[1632]: 2025-10-29 00:42:45.803 [INFO][4572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52693900ad1 ContainerID="8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9tnk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0" Oct 29 00:42:45.821848 containerd[1632]: 2025-10-29 00:42:45.806 [INFO][4572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9tnk" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0" Oct 29 00:42:45.821908 containerd[1632]: 2025-10-29 00:42:45.806 [INFO][4572] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9tnk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d50e446a-d926-4232-912c-aaf27bd789fe", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb", Pod:"coredns-674b8bbfcf-n9tnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52693900ad1", MAC:"f2:5f:2d:4a:98:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:45.821908 containerd[1632]: 2025-10-29 00:42:45.816 [INFO][4572] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-n9tnk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n9tnk-eth0" Oct 29 00:42:45.838180 containerd[1632]: time="2025-10-29T00:42:45.838024262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf97f5b86-fqx7t,Uid:63def325-7646-4955-b342-50757e8ccbe9,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f3eb5de0230c5f562b83a1416c56d1796512dd67e3547ef863c9e2b7ebed2cc\"" Oct 29 00:42:45.845709 containerd[1632]: time="2025-10-29T00:42:45.845657317Z" level=info msg="connecting to shim 8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb" address="unix:///run/containerd/s/6abc3bda4818de91c7fb29b3a5e3d1c8b2ad9df59b09e19a27ed39e062043dab" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:45.873161 systemd[1]: Started cri-containerd-8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb.scope - libcontainer container 8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb. 
Oct 29 00:42:45.901283 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:45.913166 systemd-networkd[1540]: cali3541908e468: Gained IPv6LL Oct 29 00:42:45.942415 containerd[1632]: time="2025-10-29T00:42:45.942283464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n9tnk,Uid:d50e446a-d926-4232-912c-aaf27bd789fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb\"" Oct 29 00:42:45.944265 kubelet[2796]: E1029 00:42:45.943927 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:45.949044 containerd[1632]: time="2025-10-29T00:42:45.948975744Z" level=info msg="CreateContainer within sandbox \"8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:42:45.961573 containerd[1632]: time="2025-10-29T00:42:45.961043643Z" level=info msg="Container 7fb726333ff0b9bb52ee581528a1aefdc3289d0c01ee55a32aaddac3f991ea58: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:42:45.965555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount497494739.mount: Deactivated successfully. 
Oct 29 00:42:45.968896 containerd[1632]: time="2025-10-29T00:42:45.968861056Z" level=info msg="CreateContainer within sandbox \"8aea3d95099b0d4bde9e27d7b14dd1cb33580508c86baadd27143baffcad10eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7fb726333ff0b9bb52ee581528a1aefdc3289d0c01ee55a32aaddac3f991ea58\"" Oct 29 00:42:45.969498 containerd[1632]: time="2025-10-29T00:42:45.969461162Z" level=info msg="StartContainer for \"7fb726333ff0b9bb52ee581528a1aefdc3289d0c01ee55a32aaddac3f991ea58\"" Oct 29 00:42:45.970280 containerd[1632]: time="2025-10-29T00:42:45.970255984Z" level=info msg="connecting to shim 7fb726333ff0b9bb52ee581528a1aefdc3289d0c01ee55a32aaddac3f991ea58" address="unix:///run/containerd/s/6abc3bda4818de91c7fb29b3a5e3d1c8b2ad9df59b09e19a27ed39e062043dab" protocol=ttrpc version=3 Oct 29 00:42:45.998247 systemd[1]: Started cri-containerd-7fb726333ff0b9bb52ee581528a1aefdc3289d0c01ee55a32aaddac3f991ea58.scope - libcontainer container 7fb726333ff0b9bb52ee581528a1aefdc3289d0c01ee55a32aaddac3f991ea58. 
Oct 29 00:42:46.037199 containerd[1632]: time="2025-10-29T00:42:46.037144212Z" level=info msg="StartContainer for \"7fb726333ff0b9bb52ee581528a1aefdc3289d0c01ee55a32aaddac3f991ea58\" returns successfully" Oct 29 00:42:46.121388 containerd[1632]: time="2025-10-29T00:42:46.121340690Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:46.123934 containerd[1632]: time="2025-10-29T00:42:46.123891117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:42:46.124162 containerd[1632]: time="2025-10-29T00:42:46.124042541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:42:46.124609 kubelet[2796]: E1029 00:42:46.124519 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:42:46.124609 kubelet[2796]: E1029 00:42:46.124590 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:42:46.126107 containerd[1632]: time="2025-10-29T00:42:46.125595856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:42:46.126152 kubelet[2796]: E1029 00:42:46.125628 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdnkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfhx9_calico-system(06790988-73f1-4592-ba5d-833c8bb13f59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:46.432384 systemd[1]: Started sshd@8-10.0.0.77:22-10.0.0.1:54224.service - OpenSSH per-connection server daemon (10.0.0.1:54224). Oct 29 00:42:46.450188 containerd[1632]: time="2025-10-29T00:42:46.450136599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579cf9b788-8b2jb,Uid:67dad18a-63e2-479c-bc13-d9830637f19e,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:42:46.510521 containerd[1632]: time="2025-10-29T00:42:46.510467889Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:46.513615 containerd[1632]: time="2025-10-29T00:42:46.513137027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:42:46.514893 containerd[1632]: time="2025-10-29T00:42:46.513346381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:42:46.515333 kubelet[2796]: E1029 00:42:46.515269 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:42:46.515430 kubelet[2796]: E1029 00:42:46.515338 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:42:46.515691 systemd-networkd[1540]: vxlan.calico: Link UP Oct 29 00:42:46.515701 systemd-networkd[1540]: vxlan.calico: Gained carrier Oct 29 00:42:46.518055 containerd[1632]: time="2025-10-29T00:42:46.517957494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:42:46.519667 kubelet[2796]: E1029 00:42:46.519566 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfgbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHand
ler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cf97f5b86-fqx7t_calico-system(63def325-7646-4955-b342-50757e8ccbe9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:46.522075 kubelet[2796]: E1029 00:42:46.521224 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cf97f5b86-fqx7t" podUID="63def325-7646-4955-b342-50757e8ccbe9" Oct 29 00:42:46.525084 sshd[4894]: Accepted publickey for core from 10.0.0.1 port 54224 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:46.527321 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:46.534448 systemd-logind[1616]: New session 9 of user core. Oct 29 00:42:46.541838 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 29 00:42:46.636352 kubelet[2796]: E1029 00:42:46.634496 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:46.643861 kubelet[2796]: E1029 00:42:46.643383 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cf97f5b86-fqx7t" podUID="63def325-7646-4955-b342-50757e8ccbe9" Oct 29 00:42:46.645262 kubelet[2796]: E1029 00:42:46.645224 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:46.648007 kubelet[2796]: E1029 00:42:46.647949 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579cf9b788-p778f" podUID="0d9ba357-e9fe-4334-aa42-2c44f212b5ae" Oct 29 00:42:46.654679 kubelet[2796]: I1029 00:42:46.654592 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n9tnk" podStartSLOduration=46.654566 podStartE2EDuration="46.654566s" podCreationTimestamp="2025-10-29 00:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:42:46.652084843 +0000 UTC m=+53.306532814" watchObservedRunningTime="2025-10-29 00:42:46.654566 +0000 UTC m=+53.309013961" Oct 29 00:42:46.660565 systemd-networkd[1540]: calib7728e48852: Link UP Oct 29 00:42:46.661243 systemd-networkd[1540]: calib7728e48852: Gained carrier Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.533 [INFO][4904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0 calico-apiserver-579cf9b788- calico-apiserver 67dad18a-63e2-479c-bc13-d9830637f19e 870 0 2025-10-29 00:42:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:579cf9b788 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-579cf9b788-8b2jb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib7728e48852 [] [] }} ContainerID="4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" 
Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-8b2jb" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--8b2jb-" Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.533 [INFO][4904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-8b2jb" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0" Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.577 [INFO][4924] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" HandleID="k8s-pod-network.4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" Workload="localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0" Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.577 [INFO][4924] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" HandleID="k8s-pod-network.4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" Workload="localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e6ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-579cf9b788-8b2jb", "timestamp":"2025-10-29 00:42:46.577575935 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.577 [INFO][4924] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.577 [INFO][4924] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.577 [INFO][4924] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.601 [INFO][4924] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" host="localhost" Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.608 [INFO][4924] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.614 [INFO][4924] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.616 [INFO][4924] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.620 [INFO][4924] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.620 [INFO][4924] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" host="localhost" Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.623 [INFO][4924] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37 Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.627 [INFO][4924] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" host="localhost" Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.639 [INFO][4924] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" host="localhost" Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.639 [INFO][4924] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" host="localhost" Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.639 [INFO][4924] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:42:46.689099 containerd[1632]: 2025-10-29 00:42:46.640 [INFO][4924] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" HandleID="k8s-pod-network.4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" Workload="localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0" Oct 29 00:42:46.689705 containerd[1632]: 2025-10-29 00:42:46.653 [INFO][4904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-8b2jb" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0", GenerateName:"calico-apiserver-579cf9b788-", Namespace:"calico-apiserver", SelfLink:"", UID:"67dad18a-63e2-479c-bc13-d9830637f19e", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579cf9b788", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-579cf9b788-8b2jb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib7728e48852", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:46.689705 containerd[1632]: 2025-10-29 00:42:46.654 [INFO][4904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-8b2jb" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0" Oct 29 00:42:46.689705 containerd[1632]: 2025-10-29 00:42:46.654 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7728e48852 ContainerID="4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-8b2jb" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0" Oct 29 00:42:46.689705 containerd[1632]: 2025-10-29 00:42:46.658 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-8b2jb" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0" Oct 29 00:42:46.689705 containerd[1632]: 2025-10-29 00:42:46.662 [INFO][4904] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-8b2jb" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0", GenerateName:"calico-apiserver-579cf9b788-", Namespace:"calico-apiserver", SelfLink:"", UID:"67dad18a-63e2-479c-bc13-d9830637f19e", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579cf9b788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37", Pod:"calico-apiserver-579cf9b788-8b2jb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib7728e48852", MAC:"d6:c9:d8:49:2e:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:46.689705 containerd[1632]: 2025-10-29 00:42:46.683 [INFO][4904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" Namespace="calico-apiserver" Pod="calico-apiserver-579cf9b788-8b2jb" WorkloadEndpoint="localhost-k8s-calico--apiserver--579cf9b788--8b2jb-eth0" Oct 29 00:42:46.717799 containerd[1632]: time="2025-10-29T00:42:46.717735214Z" level=info msg="connecting to shim 4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37" address="unix:///run/containerd/s/630577d88450cbae47d9c85d86234bbd18f2e9c1fe855510fb7c8c1de71a0f72" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:46.759112 sshd[4932]: Connection closed by 10.0.0.1 port 54224 Oct 29 00:42:46.757525 sshd-session[4894]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:46.765240 systemd[1]: sshd@8-10.0.0.77:22-10.0.0.1:54224.service: Deactivated successfully. Oct 29 00:42:46.771332 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 00:42:46.776920 systemd-logind[1616]: Session 9 logged out. Waiting for processes to exit. Oct 29 00:42:46.780194 systemd-logind[1616]: Removed session 9. Oct 29 00:42:46.797153 systemd[1]: Started cri-containerd-4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37.scope - libcontainer container 4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37. 
Oct 29 00:42:46.810201 systemd-networkd[1540]: calia1bfd478811: Gained IPv6LL Oct 29 00:42:46.817508 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:46.852069 containerd[1632]: time="2025-10-29T00:42:46.851942092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579cf9b788-8b2jb,Uid:67dad18a-63e2-479c-bc13-d9830637f19e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4f8b249c569d981d404b496b994124553f9217ea442cc5b1e14e9d939ebabf37\"" Oct 29 00:42:46.897413 containerd[1632]: time="2025-10-29T00:42:46.897351275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:46.898606 containerd[1632]: time="2025-10-29T00:42:46.898574260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:42:46.899083 containerd[1632]: time="2025-10-29T00:42:46.898657596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:42:46.899142 kubelet[2796]: E1029 00:42:46.898816 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:42:46.899142 kubelet[2796]: E1029 00:42:46.898884 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:42:46.899233 kubelet[2796]: E1029 00:42:46.899138 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdnkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:
RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfhx9_calico-system(06790988-73f1-4592-ba5d-833c8bb13f59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:46.899511 containerd[1632]: time="2025-10-29T00:42:46.899216114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:46.900675 kubelet[2796]: E1029 00:42:46.900612 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfhx9" podUID="06790988-73f1-4592-ba5d-833c8bb13f59" Oct 29 00:42:47.257165 systemd-networkd[1540]: calie57163ebcd6: Gained IPv6LL Oct 29 00:42:47.261458 containerd[1632]: time="2025-10-29T00:42:47.261374414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:47.263855 containerd[1632]: 
time="2025-10-29T00:42:47.263810416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:47.263855 containerd[1632]: time="2025-10-29T00:42:47.263844920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:47.264167 kubelet[2796]: E1029 00:42:47.264117 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:47.264438 kubelet[2796]: E1029 00:42:47.264184 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:47.264438 kubelet[2796]: E1029 00:42:47.264376 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbql9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-579cf9b788-8b2jb_calico-apiserver(67dad18a-63e2-479c-bc13-d9830637f19e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:47.265588 kubelet[2796]: E1029 00:42:47.265547 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579cf9b788-8b2jb" podUID="67dad18a-63e2-479c-bc13-d9830637f19e" Oct 29 00:42:47.385225 systemd-networkd[1540]: cali52693900ad1: Gained IPv6LL Oct 29 00:42:47.648749 kubelet[2796]: E1029 00:42:47.648618 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:47.649390 kubelet[2796]: E1029 00:42:47.649354 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cf97f5b86-fqx7t" podUID="63def325-7646-4955-b342-50757e8ccbe9" Oct 29 00:42:47.649977 kubelet[2796]: E1029 00:42:47.649943 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfhx9" podUID="06790988-73f1-4592-ba5d-833c8bb13f59" Oct 29 00:42:47.650183 kubelet[2796]: E1029 00:42:47.650018 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579cf9b788-8b2jb" podUID="67dad18a-63e2-479c-bc13-d9830637f19e" Oct 29 00:42:47.897227 systemd-networkd[1540]: vxlan.calico: Gained IPv6LL Oct 29 00:42:47.897581 systemd-networkd[1540]: calib7728e48852: Gained IPv6LL Oct 29 00:42:48.649939 kubelet[2796]: E1029 00:42:48.649894 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:48.650792 kubelet[2796]: E1029 00:42:48.650349 2796 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579cf9b788-8b2jb" podUID="67dad18a-63e2-479c-bc13-d9830637f19e" Oct 29 00:42:51.777067 systemd[1]: Started sshd@9-10.0.0.77:22-10.0.0.1:43160.service - OpenSSH per-connection server daemon (10.0.0.1:43160). Oct 29 00:42:51.835176 sshd[5077]: Accepted publickey for core from 10.0.0.1 port 43160 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:51.837049 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:51.841879 systemd-logind[1616]: New session 10 of user core. Oct 29 00:42:51.854207 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 29 00:42:51.973730 sshd[5080]: Connection closed by 10.0.0.1 port 43160 Oct 29 00:42:51.974158 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:51.985133 systemd[1]: sshd@9-10.0.0.77:22-10.0.0.1:43160.service: Deactivated successfully. Oct 29 00:42:51.987224 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 00:42:51.988082 systemd-logind[1616]: Session 10 logged out. Waiting for processes to exit. Oct 29 00:42:51.991069 systemd[1]: Started sshd@10-10.0.0.77:22-10.0.0.1:43174.service - OpenSSH per-connection server daemon (10.0.0.1:43174). Oct 29 00:42:51.991898 systemd-logind[1616]: Removed session 10. 
Oct 29 00:42:52.052668 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 43174 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:52.054621 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:52.059795 systemd-logind[1616]: New session 11 of user core. Oct 29 00:42:52.067154 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 29 00:42:52.235026 sshd[5104]: Connection closed by 10.0.0.1 port 43174 Oct 29 00:42:52.236284 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:52.248693 systemd[1]: sshd@10-10.0.0.77:22-10.0.0.1:43174.service: Deactivated successfully. Oct 29 00:42:52.251263 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 00:42:52.253170 systemd-logind[1616]: Session 11 logged out. Waiting for processes to exit. Oct 29 00:42:52.257792 systemd[1]: Started sshd@11-10.0.0.77:22-10.0.0.1:43186.service - OpenSSH per-connection server daemon (10.0.0.1:43186). Oct 29 00:42:52.260661 systemd-logind[1616]: Removed session 11. Oct 29 00:42:52.317451 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 43186 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:52.319323 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:52.324634 systemd-logind[1616]: New session 12 of user core. Oct 29 00:42:52.330164 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 29 00:42:52.448825 sshd[5121]: Connection closed by 10.0.0.1 port 43186 Oct 29 00:42:52.449270 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:52.454401 systemd[1]: sshd@11-10.0.0.77:22-10.0.0.1:43186.service: Deactivated successfully. Oct 29 00:42:52.456804 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 00:42:52.457615 systemd-logind[1616]: Session 12 logged out. Waiting for processes to exit. 
Oct 29 00:42:52.459058 systemd-logind[1616]: Removed session 12. Oct 29 00:42:53.449385 kubelet[2796]: E1029 00:42:53.449335 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:53.450200 containerd[1632]: time="2025-10-29T00:42:53.450118480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5qxng,Uid:5905455c-a441-499c-8f77-8f1bcb5b5830,Namespace:kube-system,Attempt:0,}" Oct 29 00:42:53.800529 systemd-networkd[1540]: cali88a540a8521: Link UP Oct 29 00:42:53.801859 systemd-networkd[1540]: cali88a540a8521: Gained carrier Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.734 [INFO][5138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--5qxng-eth0 coredns-674b8bbfcf- kube-system 5905455c-a441-499c-8f77-8f1bcb5b5830 860 0 2025-10-29 00:42:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-5qxng eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali88a540a8521 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" Namespace="kube-system" Pod="coredns-674b8bbfcf-5qxng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5qxng-" Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.734 [INFO][5138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" Namespace="kube-system" Pod="coredns-674b8bbfcf-5qxng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5qxng-eth0" Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.760 [INFO][5152] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" HandleID="k8s-pod-network.98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" Workload="localhost-k8s-coredns--674b8bbfcf--5qxng-eth0" Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.761 [INFO][5152] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" HandleID="k8s-pod-network.98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" Workload="localhost-k8s-coredns--674b8bbfcf--5qxng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-5qxng", "timestamp":"2025-10-29 00:42:53.760823318 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.761 [INFO][5152] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.761 [INFO][5152] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.761 [INFO][5152] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.767 [INFO][5152] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" host="localhost" Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.772 [INFO][5152] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.776 [INFO][5152] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.778 [INFO][5152] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.780 [INFO][5152] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.780 [INFO][5152] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" host="localhost" Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.782 [INFO][5152] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535 Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.786 [INFO][5152] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" host="localhost" Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.793 [INFO][5152] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" host="localhost" Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.793 [INFO][5152] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" host="localhost" Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.793 [INFO][5152] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:42:53.818808 containerd[1632]: 2025-10-29 00:42:53.793 [INFO][5152] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" HandleID="k8s-pod-network.98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" Workload="localhost-k8s-coredns--674b8bbfcf--5qxng-eth0" Oct 29 00:42:53.821141 containerd[1632]: 2025-10-29 00:42:53.796 [INFO][5138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" Namespace="kube-system" Pod="coredns-674b8bbfcf-5qxng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5qxng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--5qxng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5905455c-a441-499c-8f77-8f1bcb5b5830", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-5qxng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88a540a8521", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:53.821141 containerd[1632]: 2025-10-29 00:42:53.796 [INFO][5138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" Namespace="kube-system" Pod="coredns-674b8bbfcf-5qxng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5qxng-eth0" Oct 29 00:42:53.821141 containerd[1632]: 2025-10-29 00:42:53.796 [INFO][5138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88a540a8521 ContainerID="98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" Namespace="kube-system" Pod="coredns-674b8bbfcf-5qxng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5qxng-eth0" Oct 29 00:42:53.821141 containerd[1632]: 2025-10-29 00:42:53.801 [INFO][5138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" Namespace="kube-system" Pod="coredns-674b8bbfcf-5qxng" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5qxng-eth0" Oct 29 00:42:53.821141 containerd[1632]: 2025-10-29 00:42:53.802 [INFO][5138] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" Namespace="kube-system" Pod="coredns-674b8bbfcf-5qxng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5qxng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--5qxng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5905455c-a441-499c-8f77-8f1bcb5b5830", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535", Pod:"coredns-674b8bbfcf-5qxng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88a540a8521", MAC:"ea:35:f3:2f:ef:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:53.821141 containerd[1632]: 2025-10-29 00:42:53.814 [INFO][5138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" Namespace="kube-system" Pod="coredns-674b8bbfcf-5qxng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5qxng-eth0" Oct 29 00:42:53.844361 containerd[1632]: time="2025-10-29T00:42:53.844314965Z" level=info msg="connecting to shim 98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535" address="unix:///run/containerd/s/9ac435a661abf66a633f0f911f0c1a000b346c86227bed1fd5277875101995c2" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:53.878226 systemd[1]: Started cri-containerd-98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535.scope - libcontainer container 98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535. 
Oct 29 00:42:53.895741 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:53.937853 containerd[1632]: time="2025-10-29T00:42:53.937794820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5qxng,Uid:5905455c-a441-499c-8f77-8f1bcb5b5830,Namespace:kube-system,Attempt:0,} returns sandbox id \"98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535\"" Oct 29 00:42:53.938823 kubelet[2796]: E1029 00:42:53.938793 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:53.944031 containerd[1632]: time="2025-10-29T00:42:53.943970354Z" level=info msg="CreateContainer within sandbox \"98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:42:53.954264 containerd[1632]: time="2025-10-29T00:42:53.954211842Z" level=info msg="Container a08daa5f4f77da96123697b0627fae45e022d2898a8689417a1285222cb212ac: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:42:53.965742 containerd[1632]: time="2025-10-29T00:42:53.965689426Z" level=info msg="CreateContainer within sandbox \"98a4153980158dd52f749b4d86b0d17887bfd14a5cafafacbb7bf62ac4179535\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a08daa5f4f77da96123697b0627fae45e022d2898a8689417a1285222cb212ac\"" Oct 29 00:42:53.974235 containerd[1632]: time="2025-10-29T00:42:53.974172194Z" level=info msg="StartContainer for \"a08daa5f4f77da96123697b0627fae45e022d2898a8689417a1285222cb212ac\"" Oct 29 00:42:53.975170 containerd[1632]: time="2025-10-29T00:42:53.975125433Z" level=info msg="connecting to shim a08daa5f4f77da96123697b0627fae45e022d2898a8689417a1285222cb212ac" address="unix:///run/containerd/s/9ac435a661abf66a633f0f911f0c1a000b346c86227bed1fd5277875101995c2" protocol=ttrpc version=3 
Oct 29 00:42:54.003212 systemd[1]: Started cri-containerd-a08daa5f4f77da96123697b0627fae45e022d2898a8689417a1285222cb212ac.scope - libcontainer container a08daa5f4f77da96123697b0627fae45e022d2898a8689417a1285222cb212ac. Oct 29 00:42:54.044630 containerd[1632]: time="2025-10-29T00:42:54.044563765Z" level=info msg="StartContainer for \"a08daa5f4f77da96123697b0627fae45e022d2898a8689417a1285222cb212ac\" returns successfully" Oct 29 00:42:54.670468 kubelet[2796]: E1029 00:42:54.670342 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:54.702431 kubelet[2796]: I1029 00:42:54.701165 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5qxng" podStartSLOduration=54.701137091 podStartE2EDuration="54.701137091s" podCreationTimestamp="2025-10-29 00:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:42:54.683723036 +0000 UTC m=+61.338171017" watchObservedRunningTime="2025-10-29 00:42:54.701137091 +0000 UTC m=+61.355585042" Oct 29 00:42:55.129286 systemd-networkd[1540]: cali88a540a8521: Gained IPv6LL Oct 29 00:42:55.671406 kubelet[2796]: E1029 00:42:55.671366 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:56.450009 containerd[1632]: time="2025-10-29T00:42:56.449814868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:42:56.673290 kubelet[2796]: E1029 00:42:56.673221 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:56.780713 containerd[1632]: 
time="2025-10-29T00:42:56.780556678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:56.906105 containerd[1632]: time="2025-10-29T00:42:56.906018189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:42:56.906323 containerd[1632]: time="2025-10-29T00:42:56.906076703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:42:56.906456 kubelet[2796]: E1029 00:42:56.906397 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:42:56.906531 kubelet[2796]: E1029 00:42:56.906458 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:42:56.906642 kubelet[2796]: E1029 00:42:56.906587 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1760c0aab9cf4aae903ec89f085f66b1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nxwlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76ccf55dd8-7n2c9_calico-system(ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:56.908677 containerd[1632]: time="2025-10-29T00:42:56.908632135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 
00:42:57.398193 containerd[1632]: time="2025-10-29T00:42:57.398120235Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:57.399326 containerd[1632]: time="2025-10-29T00:42:57.399264525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:42:57.399442 containerd[1632]: time="2025-10-29T00:42:57.399372485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:42:57.399560 kubelet[2796]: E1029 00:42:57.399508 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:42:57.399622 kubelet[2796]: E1029 00:42:57.399571 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:42:57.399767 kubelet[2796]: E1029 00:42:57.399723 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxwlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76ccf55dd8-7n2c9_calico-system(ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:57.400962 kubelet[2796]: E1029 00:42:57.400896 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76ccf55dd8-7n2c9" podUID="ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0" Oct 29 00:42:57.450113 containerd[1632]: time="2025-10-29T00:42:57.450056677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:57.473522 systemd[1]: Started sshd@12-10.0.0.77:22-10.0.0.1:43190.service - OpenSSH per-connection server daemon (10.0.0.1:43190). Oct 29 00:42:57.562739 sshd[5264]: Accepted publickey for core from 10.0.0.1 port 43190 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:57.564848 sshd-session[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:57.569697 systemd-logind[1616]: New session 13 of user core. Oct 29 00:42:57.582130 systemd[1]: Started session-13.scope - Session 13 of User core. 
Oct 29 00:42:57.705176 sshd[5267]: Connection closed by 10.0.0.1 port 43190 Oct 29 00:42:57.705436 sshd-session[5264]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:57.710315 systemd[1]: sshd@12-10.0.0.77:22-10.0.0.1:43190.service: Deactivated successfully. Oct 29 00:42:57.712709 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 00:42:57.713646 systemd-logind[1616]: Session 13 logged out. Waiting for processes to exit. Oct 29 00:42:57.715116 systemd-logind[1616]: Removed session 13. Oct 29 00:42:57.771945 containerd[1632]: time="2025-10-29T00:42:57.771867068Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:57.773146 containerd[1632]: time="2025-10-29T00:42:57.773110862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:57.773233 containerd[1632]: time="2025-10-29T00:42:57.773141611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:57.773472 kubelet[2796]: E1029 00:42:57.773406 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:57.773869 kubelet[2796]: E1029 00:42:57.773480 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:57.773869 kubelet[2796]: E1029 00:42:57.773670 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csmsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6555bc8b57-6t6f2_calico-apiserver(e6e3d24d-0964-48c5-ab21-4abb2f93d132): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:57.775700 kubelet[2796]: E1029 00:42:57.775649 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6555bc8b57-6t6f2" podUID="e6e3d24d-0964-48c5-ab21-4abb2f93d132" Oct 29 00:42:58.451350 containerd[1632]: time="2025-10-29T00:42:58.451239428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:42:58.811098 containerd[1632]: 
time="2025-10-29T00:42:58.810912488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:58.812710 containerd[1632]: time="2025-10-29T00:42:58.812663587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:42:58.812779 containerd[1632]: time="2025-10-29T00:42:58.812728413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:42:58.813039 kubelet[2796]: E1029 00:42:58.812966 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:42:58.813447 kubelet[2796]: E1029 00:42:58.813052 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:42:58.813447 kubelet[2796]: E1029 00:42:58.813326 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfgbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cf97f5b86-fqx7t_calico-system(63def325-7646-4955-b342-50757e8ccbe9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:58.813960 containerd[1632]: time="2025-10-29T00:42:58.813889855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:42:58.814934 kubelet[2796]: E1029 00:42:58.814894 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cf97f5b86-fqx7t" podUID="63def325-7646-4955-b342-50757e8ccbe9" Oct 29 00:42:59.147871 containerd[1632]: 
time="2025-10-29T00:42:59.147798261Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:59.149067 containerd[1632]: time="2025-10-29T00:42:59.149004548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:42:59.149155 containerd[1632]: time="2025-10-29T00:42:59.149122125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:59.149361 kubelet[2796]: E1029 00:42:59.149290 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:42:59.149434 kubelet[2796]: E1029 00:42:59.149361 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:42:59.149566 kubelet[2796]: E1029 00:42:59.149522 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xghbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jvtk4_calico-system(94b96309-8719-4f92-83c6-e3ea53662334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:59.150729 kubelet[2796]: E1029 00:42:59.150684 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jvtk4" podUID="94b96309-8719-4f92-83c6-e3ea53662334" Oct 29 00:42:59.449655 containerd[1632]: time="2025-10-29T00:42:59.449455387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:59.780262 containerd[1632]: time="2025-10-29T00:42:59.780127387Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Oct 29 00:42:59.781358 containerd[1632]: time="2025-10-29T00:42:59.781298586Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:59.781431 containerd[1632]: time="2025-10-29T00:42:59.781398088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:59.781588 kubelet[2796]: E1029 00:42:59.781531 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:59.781637 kubelet[2796]: E1029 00:42:59.781591 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:59.781785 kubelet[2796]: E1029 00:42:59.781742 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrp5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-579cf9b788-p778f_calico-apiserver(0d9ba357-e9fe-4334-aa42-2c44f212b5ae): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:59.783084 kubelet[2796]: E1029 00:42:59.783032 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579cf9b788-p778f" podUID="0d9ba357-e9fe-4334-aa42-2c44f212b5ae" Oct 29 00:43:00.449470 containerd[1632]: time="2025-10-29T00:43:00.449426939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:43:00.810399 containerd[1632]: time="2025-10-29T00:43:00.810246592Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:00.811429 containerd[1632]: time="2025-10-29T00:43:00.811372062Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:43:00.811583 containerd[1632]: time="2025-10-29T00:43:00.811454160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:43:00.811662 kubelet[2796]: E1029 00:43:00.811615 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:43:00.812010 kubelet[2796]: E1029 00:43:00.811679 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:43:00.812010 kubelet[2796]: E1029 00:43:00.811825 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdnkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfhx9_calico-system(06790988-73f1-4592-ba5d-833c8bb13f59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:00.814763 containerd[1632]: time="2025-10-29T00:43:00.814708363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:43:01.128120 containerd[1632]: time="2025-10-29T00:43:01.128047168Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:01.129302 containerd[1632]: time="2025-10-29T00:43:01.129271636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:43:01.129413 containerd[1632]: time="2025-10-29T00:43:01.129364886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:43:01.129575 kubelet[2796]: E1029 00:43:01.129519 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:43:01.129632 kubelet[2796]: E1029 00:43:01.129578 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:43:01.129761 kubelet[2796]: E1029 00:43:01.129725 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdnkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfhx9_calico-system(06790988-73f1-4592-ba5d-833c8bb13f59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:01.131087 kubelet[2796]: E1029 00:43:01.130961 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfhx9" podUID="06790988-73f1-4592-ba5d-833c8bb13f59" Oct 29 00:43:02.449379 containerd[1632]: time="2025-10-29T00:43:02.449320344Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:43:02.729043 systemd[1]: Started sshd@13-10.0.0.77:22-10.0.0.1:57474.service - OpenSSH per-connection server daemon (10.0.0.1:57474). Oct 29 00:43:02.789271 sshd[5288]: Accepted publickey for core from 10.0.0.1 port 57474 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:02.790630 sshd-session[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:02.794704 containerd[1632]: time="2025-10-29T00:43:02.794643311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:02.795134 systemd-logind[1616]: New session 14 of user core. Oct 29 00:43:02.805122 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 29 00:43:02.843375 containerd[1632]: time="2025-10-29T00:43:02.843323036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:43:02.843375 containerd[1632]: time="2025-10-29T00:43:02.843351391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:43:02.843685 kubelet[2796]: E1029 00:43:02.843619 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:43:02.844090 kubelet[2796]: E1029 00:43:02.843685 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:43:02.844090 kubelet[2796]: E1029 00:43:02.843833 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbql9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-579cf9b788-8b2jb_calico-apiserver(67dad18a-63e2-479c-bc13-d9830637f19e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:02.845092 kubelet[2796]: E1029 00:43:02.845052 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579cf9b788-8b2jb" podUID="67dad18a-63e2-479c-bc13-d9830637f19e" Oct 29 00:43:02.922288 sshd[5291]: Connection closed by 10.0.0.1 port 57474 Oct 29 00:43:02.922808 sshd-session[5288]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:02.927361 systemd[1]: 
sshd@13-10.0.0.77:22-10.0.0.1:57474.service: Deactivated successfully. Oct 29 00:43:02.929634 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 00:43:02.931216 systemd-logind[1616]: Session 14 logged out. Waiting for processes to exit. Oct 29 00:43:02.933331 systemd-logind[1616]: Removed session 14. Oct 29 00:43:04.448133 kubelet[2796]: E1029 00:43:04.448088 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:43:07.939046 systemd[1]: Started sshd@14-10.0.0.77:22-10.0.0.1:57482.service - OpenSSH per-connection server daemon (10.0.0.1:57482). Oct 29 00:43:08.003461 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 57482 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:08.004774 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:08.009220 systemd-logind[1616]: New session 15 of user core. Oct 29 00:43:08.018125 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 29 00:43:08.143013 sshd[5317]: Connection closed by 10.0.0.1 port 57482 Oct 29 00:43:08.143339 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:08.147305 systemd[1]: sshd@14-10.0.0.77:22-10.0.0.1:57482.service: Deactivated successfully. Oct 29 00:43:08.149516 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 00:43:08.150470 systemd-logind[1616]: Session 15 logged out. Waiting for processes to exit. Oct 29 00:43:08.151738 systemd-logind[1616]: Removed session 15. 
Oct 29 00:43:09.448094 kubelet[2796]: E1029 00:43:09.447826 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:43:09.449462 kubelet[2796]: E1029 00:43:09.449358 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76ccf55dd8-7n2c9" podUID="ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0" Oct 29 00:43:10.448283 kubelet[2796]: E1029 00:43:10.448174 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:43:10.449554 kubelet[2796]: E1029 00:43:10.449456 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6555bc8b57-6t6f2" podUID="e6e3d24d-0964-48c5-ab21-4abb2f93d132" Oct 29 00:43:12.448936 kubelet[2796]: E1029 00:43:12.448889 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jvtk4" podUID="94b96309-8719-4f92-83c6-e3ea53662334" Oct 29 00:43:12.448936 kubelet[2796]: E1029 00:43:12.448889 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cf97f5b86-fqx7t" podUID="63def325-7646-4955-b342-50757e8ccbe9" Oct 29 00:43:12.676050 containerd[1632]: time="2025-10-29T00:43:12.675984943Z" level=info msg="TaskExit event in podsandbox handler container_id:\"969a8899f9df5d803af8a3b060eadac64d8c2068eb3bedf0348f51d12260d096\" id:\"173839947610ea0f71c8292de7d840678111ca9bb82c4d244f4884ae7d2dfe45\" pid:5341 exited_at:{seconds:1761698592 nanos:675624985}" Oct 29 00:43:12.679328 kubelet[2796]: E1029 00:43:12.679301 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:43:13.160647 systemd[1]: Started sshd@15-10.0.0.77:22-10.0.0.1:50586.service - OpenSSH per-connection server daemon (10.0.0.1:50586). Oct 29 00:43:13.236852 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 50586 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:13.238657 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:13.243911 systemd-logind[1616]: New session 16 of user core. Oct 29 00:43:13.257170 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 29 00:43:13.382980 sshd[5357]: Connection closed by 10.0.0.1 port 50586 Oct 29 00:43:13.383325 sshd-session[5354]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:13.392899 systemd[1]: sshd@15-10.0.0.77:22-10.0.0.1:50586.service: Deactivated successfully. Oct 29 00:43:13.395153 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 00:43:13.396081 systemd-logind[1616]: Session 16 logged out. Waiting for processes to exit. Oct 29 00:43:13.399641 systemd[1]: Started sshd@16-10.0.0.77:22-10.0.0.1:50602.service - OpenSSH per-connection server daemon (10.0.0.1:50602). Oct 29 00:43:13.400882 systemd-logind[1616]: Removed session 16. 
Oct 29 00:43:13.449799 kubelet[2796]: E1029 00:43:13.449441 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579cf9b788-p778f" podUID="0d9ba357-e9fe-4334-aa42-2c44f212b5ae" Oct 29 00:43:13.454303 sshd[5371]: Accepted publickey for core from 10.0.0.1 port 50602 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:13.456089 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:13.461861 systemd-logind[1616]: New session 17 of user core. Oct 29 00:43:13.470164 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 29 00:43:13.762644 sshd[5374]: Connection closed by 10.0.0.1 port 50602 Oct 29 00:43:13.764669 sshd-session[5371]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:13.773778 systemd[1]: sshd@16-10.0.0.77:22-10.0.0.1:50602.service: Deactivated successfully. Oct 29 00:43:13.775788 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 00:43:13.776719 systemd-logind[1616]: Session 17 logged out. Waiting for processes to exit. Oct 29 00:43:13.779873 systemd[1]: Started sshd@17-10.0.0.77:22-10.0.0.1:50610.service - OpenSSH per-connection server daemon (10.0.0.1:50610). Oct 29 00:43:13.780920 systemd-logind[1616]: Removed session 17. 
Oct 29 00:43:13.861823 sshd[5386]: Accepted publickey for core from 10.0.0.1 port 50610 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:13.863414 sshd-session[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:13.869802 systemd-logind[1616]: New session 18 of user core. Oct 29 00:43:13.883173 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 29 00:43:14.449684 kubelet[2796]: E1029 00:43:14.449616 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfhx9" podUID="06790988-73f1-4592-ba5d-833c8bb13f59" Oct 29 00:43:14.545416 sshd[5391]: Connection closed by 10.0.0.1 port 50610 Oct 29 00:43:14.548136 sshd-session[5386]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:14.556542 systemd[1]: sshd@17-10.0.0.77:22-10.0.0.1:50610.service: Deactivated successfully. Oct 29 00:43:14.559087 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 00:43:14.561054 systemd-logind[1616]: Session 18 logged out. Waiting for processes to exit. 
Oct 29 00:43:14.568365 systemd[1]: Started sshd@18-10.0.0.77:22-10.0.0.1:50612.service - OpenSSH per-connection server daemon (10.0.0.1:50612). Oct 29 00:43:14.569309 systemd-logind[1616]: Removed session 18. Oct 29 00:43:14.625716 sshd[5409]: Accepted publickey for core from 10.0.0.1 port 50612 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:14.627327 sshd-session[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:14.631749 systemd-logind[1616]: New session 19 of user core. Oct 29 00:43:14.650162 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 29 00:43:14.934654 sshd[5412]: Connection closed by 10.0.0.1 port 50612 Oct 29 00:43:14.935560 sshd-session[5409]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:14.944757 systemd[1]: sshd@18-10.0.0.77:22-10.0.0.1:50612.service: Deactivated successfully. Oct 29 00:43:14.948078 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 00:43:14.949217 systemd-logind[1616]: Session 19 logged out. Waiting for processes to exit. Oct 29 00:43:14.952672 systemd[1]: Started sshd@19-10.0.0.77:22-10.0.0.1:50620.service - OpenSSH per-connection server daemon (10.0.0.1:50620). Oct 29 00:43:14.954506 systemd-logind[1616]: Removed session 19. Oct 29 00:43:15.006453 sshd[5425]: Accepted publickey for core from 10.0.0.1 port 50620 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:15.008814 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:15.018046 systemd-logind[1616]: New session 20 of user core. Oct 29 00:43:15.027229 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 29 00:43:15.161716 sshd[5428]: Connection closed by 10.0.0.1 port 50620 Oct 29 00:43:15.162245 sshd-session[5425]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:15.168078 systemd-logind[1616]: Session 20 logged out. 
Waiting for processes to exit. Oct 29 00:43:15.168282 systemd[1]: sshd@19-10.0.0.77:22-10.0.0.1:50620.service: Deactivated successfully. Oct 29 00:43:15.170549 systemd[1]: session-20.scope: Deactivated successfully. Oct 29 00:43:15.173812 systemd-logind[1616]: Removed session 20. Oct 29 00:43:17.449274 kubelet[2796]: E1029 00:43:17.449211 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579cf9b788-8b2jb" podUID="67dad18a-63e2-479c-bc13-d9830637f19e" Oct 29 00:43:18.448358 kubelet[2796]: E1029 00:43:18.448299 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:43:20.173541 systemd[1]: Started sshd@20-10.0.0.77:22-10.0.0.1:51358.service - OpenSSH per-connection server daemon (10.0.0.1:51358). Oct 29 00:43:20.241858 sshd[5444]: Accepted publickey for core from 10.0.0.1 port 51358 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:20.244565 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:20.250241 systemd-logind[1616]: New session 21 of user core. Oct 29 00:43:20.261120 systemd[1]: Started session-21.scope - Session 21 of User core. 
Oct 29 00:43:20.415376 sshd[5447]: Connection closed by 10.0.0.1 port 51358 Oct 29 00:43:20.415720 sshd-session[5444]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:20.419052 systemd[1]: sshd@20-10.0.0.77:22-10.0.0.1:51358.service: Deactivated successfully. Oct 29 00:43:20.421070 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 00:43:20.423060 systemd-logind[1616]: Session 21 logged out. Waiting for processes to exit. Oct 29 00:43:20.423998 systemd-logind[1616]: Removed session 21. Oct 29 00:43:21.449754 containerd[1632]: time="2025-10-29T00:43:21.449687376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:43:21.765735 containerd[1632]: time="2025-10-29T00:43:21.765595582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:21.813966 containerd[1632]: time="2025-10-29T00:43:21.813635541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:43:21.814285 kubelet[2796]: E1029 00:43:21.814049 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:43:21.814708 kubelet[2796]: E1029 00:43:21.814301 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:43:21.814965 kubelet[2796]: E1029 00:43:21.814904 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csmsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6555bc8b57-6t6f2_calico-apiserver(e6e3d24d-0964-48c5-ab21-4abb2f93d132): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:21.816083 kubelet[2796]: E1029 00:43:21.816049 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6555bc8b57-6t6f2" podUID="e6e3d24d-0964-48c5-ab21-4abb2f93d132" Oct 29 00:43:21.818926 containerd[1632]: time="2025-10-29T00:43:21.813662703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:43:23.450595 
containerd[1632]: time="2025-10-29T00:43:23.450526746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:43:23.812100 containerd[1632]: time="2025-10-29T00:43:23.811895890Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:23.813333 containerd[1632]: time="2025-10-29T00:43:23.813242724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:43:23.813400 containerd[1632]: time="2025-10-29T00:43:23.813360819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:43:23.813629 kubelet[2796]: E1029 00:43:23.813552 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:43:23.813629 kubelet[2796]: E1029 00:43:23.813633 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:43:23.814114 kubelet[2796]: E1029 00:43:23.813781 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1760c0aab9cf4aae903ec89f085f66b1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nxwlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76ccf55dd8-7n2c9_calico-system(ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:23.815902 containerd[1632]: time="2025-10-29T00:43:23.815876510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 
00:43:24.142953 containerd[1632]: time="2025-10-29T00:43:24.142873424Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:24.144171 containerd[1632]: time="2025-10-29T00:43:24.144114677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:43:24.144437 containerd[1632]: time="2025-10-29T00:43:24.144211852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:43:24.144480 kubelet[2796]: E1029 00:43:24.144417 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:43:24.144480 kubelet[2796]: E1029 00:43:24.144476 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:43:24.144880 kubelet[2796]: E1029 00:43:24.144813 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxwlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76ccf55dd8-7n2c9_calico-system(ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:24.146631 kubelet[2796]: E1029 00:43:24.146551 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76ccf55dd8-7n2c9" podUID="ec6228cd-f4f8-4d8b-9e13-5218fd64e5d0" Oct 29 00:43:24.450267 containerd[1632]: time="2025-10-29T00:43:24.450113073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:43:24.792448 containerd[1632]: time="2025-10-29T00:43:24.792297481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:24.793855 containerd[1632]: time="2025-10-29T00:43:24.793773220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:43:24.793932 containerd[1632]: time="2025-10-29T00:43:24.793795151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:43:24.794236 
kubelet[2796]: E1029 00:43:24.794181 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:43:24.794383 kubelet[2796]: E1029 00:43:24.794246 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:43:24.794533 kubelet[2796]: E1029 00:43:24.794486 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrp5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-579cf9b788-p778f_calico-apiserver(0d9ba357-e9fe-4334-aa42-2c44f212b5ae): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:24.795551 containerd[1632]: time="2025-10-29T00:43:24.795186150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:43:24.796659 kubelet[2796]: E1029 00:43:24.796473 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579cf9b788-p778f" podUID="0d9ba357-e9fe-4334-aa42-2c44f212b5ae" Oct 29 00:43:25.100787 containerd[1632]: time="2025-10-29T00:43:25.100712911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:25.192513 containerd[1632]: time="2025-10-29T00:43:25.192342243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:43:25.192513 containerd[1632]: time="2025-10-29T00:43:25.192398039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:43:25.192763 kubelet[2796]: E1029 00:43:25.192667 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:43:25.192763 kubelet[2796]: E1029 00:43:25.192727 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:43:25.197811 kubelet[2796]: E1029 00:43:25.197705 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xghbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jvtk4_calico-system(94b96309-8719-4f92-83c6-e3ea53662334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:25.198964 kubelet[2796]: E1029 00:43:25.198927 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jvtk4" podUID="94b96309-8719-4f92-83c6-e3ea53662334" Oct 29 00:43:25.428100 systemd[1]: Started sshd@21-10.0.0.77:22-10.0.0.1:51374.service - OpenSSH per-connection server daemon (10.0.0.1:51374). Oct 29 00:43:25.450874 containerd[1632]: time="2025-10-29T00:43:25.450816824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:43:25.495532 sshd[5460]: Accepted publickey for core from 10.0.0.1 port 51374 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:25.497520 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:25.502406 systemd-logind[1616]: New session 22 of user core. Oct 29 00:43:25.513141 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 29 00:43:25.635221 sshd[5463]: Connection closed by 10.0.0.1 port 51374 Oct 29 00:43:25.635691 sshd-session[5460]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:25.642025 systemd[1]: sshd@21-10.0.0.77:22-10.0.0.1:51374.service: Deactivated successfully. Oct 29 00:43:25.644848 systemd[1]: session-22.scope: Deactivated successfully. Oct 29 00:43:25.645772 systemd-logind[1616]: Session 22 logged out. Waiting for processes to exit. Oct 29 00:43:25.647239 systemd-logind[1616]: Removed session 22. 
Oct 29 00:43:25.827031 containerd[1632]: time="2025-10-29T00:43:25.825571854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:25.831021 containerd[1632]: time="2025-10-29T00:43:25.828723712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:43:25.831135 containerd[1632]: time="2025-10-29T00:43:25.828937128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:43:25.831530 kubelet[2796]: E1029 00:43:25.831471 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:43:25.831605 kubelet[2796]: E1029 00:43:25.831543 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:43:25.832172 kubelet[2796]: E1029 00:43:25.832112 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfgbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cf97f5b86-fqx7t_calico-system(63def325-7646-4955-b342-50757e8ccbe9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:25.833320 kubelet[2796]: E1029 00:43:25.833287 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cf97f5b86-fqx7t" podUID="63def325-7646-4955-b342-50757e8ccbe9" Oct 29 00:43:26.449054 containerd[1632]: time="2025-10-29T00:43:26.448944543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:43:26.811356 containerd[1632]: 
time="2025-10-29T00:43:26.811197552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:26.812551 containerd[1632]: time="2025-10-29T00:43:26.812491813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:43:26.812617 containerd[1632]: time="2025-10-29T00:43:26.812574171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:43:26.812826 kubelet[2796]: E1029 00:43:26.812772 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:43:26.813176 kubelet[2796]: E1029 00:43:26.812830 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:43:26.813176 kubelet[2796]: E1029 00:43:26.812975 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdnkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfhx9_calico-system(06790988-73f1-4592-ba5d-833c8bb13f59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:26.815435 containerd[1632]: time="2025-10-29T00:43:26.815406999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:43:27.229455 containerd[1632]: time="2025-10-29T00:43:27.229394038Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:27.231068 containerd[1632]: time="2025-10-29T00:43:27.231019548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:43:27.231121 containerd[1632]: time="2025-10-29T00:43:27.231025490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:43:27.231427 kubelet[2796]: E1029 00:43:27.231370 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:43:27.231493 kubelet[2796]: E1029 00:43:27.231444 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:43:27.231721 kubelet[2796]: E1029 
00:43:27.231629 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdnkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-dfhx9_calico-system(06790988-73f1-4592-ba5d-833c8bb13f59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:27.233166 kubelet[2796]: E1029 00:43:27.233105 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfhx9" podUID="06790988-73f1-4592-ba5d-833c8bb13f59" Oct 29 00:43:30.648376 systemd[1]: Started sshd@22-10.0.0.77:22-10.0.0.1:52558.service - OpenSSH per-connection server daemon (10.0.0.1:52558). Oct 29 00:43:30.737401 sshd[5482]: Accepted publickey for core from 10.0.0.1 port 52558 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:30.739113 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:30.745816 systemd-logind[1616]: New session 23 of user core. Oct 29 00:43:30.754194 systemd[1]: Started session-23.scope - Session 23 of User core. 
Oct 29 00:43:30.892140 sshd[5485]: Connection closed by 10.0.0.1 port 52558 Oct 29 00:43:30.892505 sshd-session[5482]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:30.896982 systemd[1]: sshd@22-10.0.0.77:22-10.0.0.1:52558.service: Deactivated successfully. Oct 29 00:43:30.899327 systemd[1]: session-23.scope: Deactivated successfully. Oct 29 00:43:30.900476 systemd-logind[1616]: Session 23 logged out. Waiting for processes to exit. Oct 29 00:43:30.902124 systemd-logind[1616]: Removed session 23. Oct 29 00:43:31.449444 containerd[1632]: time="2025-10-29T00:43:31.449388808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:43:31.795689 containerd[1632]: time="2025-10-29T00:43:31.795523217Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:31.910239 containerd[1632]: time="2025-10-29T00:43:31.910168728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:43:31.910239 containerd[1632]: time="2025-10-29T00:43:31.910236266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:43:31.910439 kubelet[2796]: E1029 00:43:31.910390 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:43:31.910751 kubelet[2796]: E1029 00:43:31.910438 2796 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:43:31.910751 kubelet[2796]: E1029 00:43:31.910587 2796 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbql9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-579cf9b788-8b2jb_calico-apiserver(67dad18a-63e2-479c-bc13-d9830637f19e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:31.911868 kubelet[2796]: E1029 00:43:31.911805 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579cf9b788-8b2jb" podUID="67dad18a-63e2-479c-bc13-d9830637f19e"