Oct 29 00:40:52.630839 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 28 22:31:02 -00 2025
Oct 29 00:40:52.630867 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae
Oct 29 00:40:52.630876 kernel: BIOS-provided physical RAM map:
Oct 29 00:40:52.630884 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Oct 29 00:40:52.630890 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Oct 29 00:40:52.630900 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Oct 29 00:40:52.630908 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Oct 29 00:40:52.630915 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Oct 29 00:40:52.630925 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Oct 29 00:40:52.630932 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Oct 29 00:40:52.630940 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Oct 29 00:40:52.630947 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Oct 29 00:40:52.630953 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Oct 29 00:40:52.630963 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Oct 29 00:40:52.630972 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Oct 29 00:40:52.630979 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Oct 29 00:40:52.630989 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 29 00:40:52.631000 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 29 00:40:52.631007 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 29 00:40:52.631016 kernel: NX (Execute Disable) protection: active
Oct 29 00:40:52.631025 kernel: APIC: Static calls initialized
Oct 29 00:40:52.631032 kernel: e820: update [mem 0x9a13d018-0x9a146c57] usable ==> usable
Oct 29 00:40:52.631040 kernel: e820: update [mem 0x9a100018-0x9a13ce57] usable ==> usable
Oct 29 00:40:52.631048 kernel: extended physical RAM map:
Oct 29 00:40:52.631056 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Oct 29 00:40:52.631063 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Oct 29 00:40:52.631071 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Oct 29 00:40:52.631078 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Oct 29 00:40:52.631088 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a100017] usable
Oct 29 00:40:52.631096 kernel: reserve setup_data: [mem 0x000000009a100018-0x000000009a13ce57] usable
Oct 29 00:40:52.631103 kernel: reserve setup_data: [mem 0x000000009a13ce58-0x000000009a13d017] usable
Oct 29 00:40:52.631110 kernel: reserve setup_data: [mem 0x000000009a13d018-0x000000009a146c57] usable
Oct 29 00:40:52.631118 kernel: reserve setup_data: [mem 0x000000009a146c58-0x000000009b8ecfff] usable
Oct 29 00:40:52.631125 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Oct 29 00:40:52.631133 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Oct 29 00:40:52.631140 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Oct 29 00:40:52.631148 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Oct 29 00:40:52.631155 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Oct 29 00:40:52.631165 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Oct 29 00:40:52.631173 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Oct 29 00:40:52.631184 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Oct 29 00:40:52.631192 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 29 00:40:52.631199 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 29 00:40:52.631209 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 29 00:40:52.631217 kernel: efi: EFI v2.7 by EDK II
Oct 29 00:40:52.631225 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Oct 29 00:40:52.631233 kernel: random: crng init done
Oct 29 00:40:52.631240 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Oct 29 00:40:52.631248 kernel: secureboot: Secure boot enabled
Oct 29 00:40:52.631256 kernel: SMBIOS 2.8 present.
Oct 29 00:40:52.631263 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Oct 29 00:40:52.631271 kernel: DMI: Memory slots populated: 1/1
Oct 29 00:40:52.631281 kernel: Hypervisor detected: KVM
Oct 29 00:40:52.631288 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Oct 29 00:40:52.631296 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 29 00:40:52.631315 kernel: kvm-clock: using sched offset of 6023178075 cycles
Oct 29 00:40:52.631333 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 29 00:40:52.631342 kernel: tsc: Detected 2794.748 MHz processor
Oct 29 00:40:52.631350 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 29 00:40:52.631359 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 29 00:40:52.631367 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Oct 29 00:40:52.631381 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 29 00:40:52.631391 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 29 00:40:52.631401 kernel: Using GB pages for direct mapping
Oct 29 00:40:52.631410 kernel: ACPI: Early table checksum verification disabled
Oct 29 00:40:52.631418 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Oct 29 00:40:52.631426 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 29 00:40:52.631434 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:40:52.631445 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:40:52.631453 kernel: ACPI: FACS 0x000000009BBDD000 000040
Oct 29 00:40:52.631467 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:40:52.631475 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:40:52.631483 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:40:52.631491 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:40:52.631499 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 29 00:40:52.631510 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Oct 29 00:40:52.631518 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Oct 29 00:40:52.631526 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Oct 29 00:40:52.631535 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Oct 29 00:40:52.631543 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Oct 29 00:40:52.631551 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Oct 29 00:40:52.631559 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Oct 29 00:40:52.631567 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Oct 29 00:40:52.631577 kernel: No NUMA configuration found
Oct 29 00:40:52.631586 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Oct 29 00:40:52.631599 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Oct 29 00:40:52.631621 kernel: Zone ranges:
Oct 29 00:40:52.631629 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 29 00:40:52.631637 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Oct 29 00:40:52.631645 kernel: Normal empty
Oct 29 00:40:52.631656 kernel: Device empty
Oct 29 00:40:52.631664 kernel: Movable zone start for each node
Oct 29 00:40:52.631672 kernel: Early memory node ranges
Oct 29 00:40:52.631680 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Oct 29 00:40:52.631688 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Oct 29 00:40:52.631696 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Oct 29 00:40:52.631704 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Oct 29 00:40:52.631712 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Oct 29 00:40:52.631722 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Oct 29 00:40:52.631730 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 29 00:40:52.631738 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Oct 29 00:40:52.631746 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 29 00:40:52.631755 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 29 00:40:52.631763 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Oct 29 00:40:52.631771 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Oct 29 00:40:52.631781 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 29 00:40:52.631789 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 29 00:40:52.631797 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 29 00:40:52.631805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 29 00:40:52.631816 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 29 00:40:52.631824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 29 00:40:52.631833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 29 00:40:52.631843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 29 00:40:52.631852 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 29 00:40:52.631860 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 29 00:40:52.631868 kernel: TSC deadline timer available
Oct 29 00:40:52.631876 kernel: CPU topo: Max. logical packages: 1
Oct 29 00:40:52.631884 kernel: CPU topo: Max. logical dies: 1
Oct 29 00:40:52.631901 kernel: CPU topo: Max. dies per package: 1
Oct 29 00:40:52.631909 kernel: CPU topo: Max. threads per core: 1
Oct 29 00:40:52.631917 kernel: CPU topo: Num. cores per package: 4
Oct 29 00:40:52.631925 kernel: CPU topo: Num. threads per package: 4
Oct 29 00:40:52.631938 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 29 00:40:52.631946 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 29 00:40:52.631954 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 29 00:40:52.631963 kernel: kvm-guest: setup PV sched yield
Oct 29 00:40:52.631973 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Oct 29 00:40:52.631981 kernel: Booting paravirtualized kernel on KVM
Oct 29 00:40:52.631990 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 29 00:40:52.631999 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 29 00:40:52.632007 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 29 00:40:52.632015 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 29 00:40:52.632024 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 29 00:40:52.632034 kernel: kvm-guest: PV spinlocks enabled
Oct 29 00:40:52.632042 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 29 00:40:52.632052 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae
Oct 29 00:40:52.632061 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 29 00:40:52.632069 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 29 00:40:52.632077 kernel: Fallback order for Node 0: 0
Oct 29 00:40:52.632086 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Oct 29 00:40:52.632096 kernel: Policy zone: DMA32
Oct 29 00:40:52.632104 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 29 00:40:52.632113 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 29 00:40:52.632121 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 29 00:40:52.632129 kernel: ftrace: allocated 157 pages with 5 groups
Oct 29 00:40:52.632138 kernel: Dynamic Preempt: voluntary
Oct 29 00:40:52.632146 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 29 00:40:52.632158 kernel: rcu: RCU event tracing is enabled.
Oct 29 00:40:52.632167 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 29 00:40:52.632175 kernel: Trampoline variant of Tasks RCU enabled.
Oct 29 00:40:52.632184 kernel: Rude variant of Tasks RCU enabled.
Oct 29 00:40:52.632192 kernel: Tracing variant of Tasks RCU enabled.
Oct 29 00:40:52.632200 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 29 00:40:52.632209 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 29 00:40:52.632219 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 00:40:52.632228 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 00:40:52.632238 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 00:40:52.632247 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 29 00:40:52.632255 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 29 00:40:52.632263 kernel: Console: colour dummy device 80x25
Oct 29 00:40:52.632272 kernel: printk: legacy console [ttyS0] enabled
Oct 29 00:40:52.632283 kernel: ACPI: Core revision 20240827
Oct 29 00:40:52.632291 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 29 00:40:52.632299 kernel: APIC: Switch to symmetric I/O mode setup
Oct 29 00:40:52.632308 kernel: x2apic enabled
Oct 29 00:40:52.632316 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 29 00:40:52.632325 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 29 00:40:52.632333 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 29 00:40:52.632344 kernel: kvm-guest: setup PV IPIs
Oct 29 00:40:52.632352 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 29 00:40:52.632360 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 29 00:40:52.632369 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 29 00:40:52.632377 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 29 00:40:52.632386 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 29 00:40:52.632394 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 29 00:40:52.632405 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 29 00:40:52.632415 kernel: Spectre V2 : Mitigation: Retpolines
Oct 29 00:40:52.632424 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 29 00:40:52.632432 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 29 00:40:52.632441 kernel: active return thunk: retbleed_return_thunk
Oct 29 00:40:52.632449 kernel: RETBleed: Mitigation: untrained return thunk
Oct 29 00:40:52.632463 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 29 00:40:52.632474 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 29 00:40:52.632494 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 29 00:40:52.632511 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 29 00:40:52.632521 kernel: active return thunk: srso_return_thunk
Oct 29 00:40:52.632530 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 29 00:40:52.632538 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 29 00:40:52.632547 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 29 00:40:52.632558 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 29 00:40:52.632567 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 29 00:40:52.632575 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 29 00:40:52.632588 kernel: Freeing SMP alternatives memory: 32K
Oct 29 00:40:52.632597 kernel: pid_max: default: 32768 minimum: 301
Oct 29 00:40:52.632619 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 29 00:40:52.632627 kernel: landlock: Up and running.
Oct 29 00:40:52.632639 kernel: SELinux: Initializing.
Oct 29 00:40:52.632648 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 00:40:52.632656 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 00:40:52.632665 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 29 00:40:52.632674 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 29 00:40:52.632682 kernel: ... version: 0
Oct 29 00:40:52.632712 kernel: ... bit width: 48
Oct 29 00:40:52.632723 kernel: ... generic registers: 6
Oct 29 00:40:52.632732 kernel: ... value mask: 0000ffffffffffff
Oct 29 00:40:52.632740 kernel: ... max period: 00007fffffffffff
Oct 29 00:40:52.632749 kernel: ... fixed-purpose events: 0
Oct 29 00:40:52.632757 kernel: ... event mask: 000000000000003f
Oct 29 00:40:52.632765 kernel: signal: max sigframe size: 1776
Oct 29 00:40:52.632773 kernel: rcu: Hierarchical SRCU implementation.
Oct 29 00:40:52.632782 kernel: rcu: Max phase no-delay instances is 400.
Oct 29 00:40:52.632793 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 29 00:40:52.632801 kernel: smp: Bringing up secondary CPUs ...
Oct 29 00:40:52.632810 kernel: smpboot: x86: Booting SMP configuration:
Oct 29 00:40:52.632818 kernel: .... node #0, CPUs: #1 #2 #3
Oct 29 00:40:52.632827 kernel: smp: Brought up 1 node, 4 CPUs
Oct 29 00:40:52.632835 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 29 00:40:52.632844 kernel: Memory: 2431740K/2552216K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114536K reserved, 0K cma-reserved)
Oct 29 00:40:52.632854 kernel: devtmpfs: initialized
Oct 29 00:40:52.632863 kernel: x86/mm: Memory block size: 128MB
Oct 29 00:40:52.632871 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Oct 29 00:40:52.632880 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Oct 29 00:40:52.632888 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 29 00:40:52.632896 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 29 00:40:52.632905 kernel: pinctrl core: initialized pinctrl subsystem
Oct 29 00:40:52.632915 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 29 00:40:52.632924 kernel: audit: initializing netlink subsys (disabled)
Oct 29 00:40:52.632932 kernel: audit: type=2000 audit(1761698449.422:1): state=initialized audit_enabled=0 res=1
Oct 29 00:40:52.632941 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 29 00:40:52.632949 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 29 00:40:52.632957 kernel: cpuidle: using governor menu
Oct 29 00:40:52.632966 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 29 00:40:52.632976 kernel: dca service started, version 1.12.1
Oct 29 00:40:52.632985 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Oct 29 00:40:52.632994 kernel: PCI: Using configuration type 1 for base access
Oct 29 00:40:52.633004 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 29 00:40:52.633015 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 29 00:40:52.633025 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 29 00:40:52.633036 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 29 00:40:52.633049 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 29 00:40:52.633059 kernel: ACPI: Added _OSI(Module Device)
Oct 29 00:40:52.633069 kernel: ACPI: Added _OSI(Processor Device)
Oct 29 00:40:52.633080 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 29 00:40:52.633090 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 29 00:40:52.633100 kernel: ACPI: Interpreter enabled
Oct 29 00:40:52.633111 kernel: ACPI: PM: (supports S0 S5)
Oct 29 00:40:52.633124 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 29 00:40:52.633135 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 29 00:40:52.633145 kernel: PCI: Using E820 reservations for host bridge windows
Oct 29 00:40:52.633156 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 29 00:40:52.633166 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 29 00:40:52.633431 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 29 00:40:52.633645 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 29 00:40:52.633828 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 29 00:40:52.633840 kernel: PCI host bridge to bus 0000:00
Oct 29 00:40:52.634043 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 29 00:40:52.634207 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 29 00:40:52.634366 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 29 00:40:52.634539 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Oct 29 00:40:52.634719 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Oct 29 00:40:52.634879 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Oct 29 00:40:52.635037 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 29 00:40:52.635228 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 29 00:40:52.635417 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 29 00:40:52.635600 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Oct 29 00:40:52.636046 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Oct 29 00:40:52.636219 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Oct 29 00:40:52.636389 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 29 00:40:52.636581 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 29 00:40:52.636786 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Oct 29 00:40:52.636960 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Oct 29 00:40:52.637132 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Oct 29 00:40:52.637314 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 29 00:40:52.637499 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Oct 29 00:40:52.637691 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Oct 29 00:40:52.637872 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Oct 29 00:40:52.638064 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 29 00:40:52.638238 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Oct 29 00:40:52.638410 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Oct 29 00:40:52.638619 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Oct 29 00:40:52.638802 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Oct 29 00:40:52.638990 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 29 00:40:52.639166 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 29 00:40:52.639359 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 29 00:40:52.639547 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Oct 29 00:40:52.639742 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Oct 29 00:40:52.639932 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 29 00:40:52.640110 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Oct 29 00:40:52.640122 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 29 00:40:52.640131 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 29 00:40:52.640140 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 29 00:40:52.640148 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 29 00:40:52.640160 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 29 00:40:52.640169 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 29 00:40:52.640177 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 29 00:40:52.640186 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 29 00:40:52.640194 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 29 00:40:52.640203 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 29 00:40:52.640211 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 29 00:40:52.640222 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 29 00:40:52.640230 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 29 00:40:52.640239 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 29 00:40:52.640247 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 29 00:40:52.640256 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 29 00:40:52.640264 kernel: iommu: Default domain type: Translated
Oct 29 00:40:52.640273 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 29 00:40:52.640283 kernel: efivars: Registered efivars operations
Oct 29 00:40:52.640292 kernel: PCI: Using ACPI for IRQ routing
Oct 29 00:40:52.640300 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 29 00:40:52.640309 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Oct 29 00:40:52.640317 kernel: e820: reserve RAM buffer [mem 0x9a100018-0x9bffffff]
Oct 29 00:40:52.640325 kernel: e820: reserve RAM buffer [mem 0x9a13d018-0x9bffffff]
Oct 29 00:40:52.640334 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Oct 29 00:40:52.640342 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Oct 29 00:40:52.640531 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 29 00:40:52.640739 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 29 00:40:52.640914 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 29 00:40:52.640926 kernel: vgaarb: loaded
Oct 29 00:40:52.640934 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 29 00:40:52.640943 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 29 00:40:52.640956 kernel: clocksource: Switched to clocksource kvm-clock
Oct 29 00:40:52.640964 kernel: VFS: Disk quotas dquot_6.6.0
Oct 29 00:40:52.640974 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 29 00:40:52.640984 kernel: pnp: PnP ACPI init
Oct 29 00:40:52.641172 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Oct 29 00:40:52.641185 kernel: pnp: PnP ACPI: found 6 devices
Oct 29 00:40:52.641194 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 29 00:40:52.641206 kernel: NET: Registered PF_INET protocol family
Oct 29 00:40:52.641215 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 29 00:40:52.641224 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 29 00:40:52.641232 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 29 00:40:52.641241 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 29 00:40:52.641249 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 29 00:40:52.641258 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 29 00:40:52.641269 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 00:40:52.641277 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 00:40:52.641286 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 29 00:40:52.641294 kernel: NET: Registered PF_XDP protocol family
Oct 29 00:40:52.641478 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Oct 29 00:40:52.641675 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Oct 29 00:40:52.641847 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 29 00:40:52.642009 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 29 00:40:52.642208 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 29 00:40:52.642373 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Oct 29 00:40:52.642545 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Oct 29 00:40:52.642724 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Oct 29 00:40:52.642737 kernel: PCI: CLS 0 bytes, default 64
Oct 29 00:40:52.642750 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 29 00:40:52.642758 kernel: Initialise system trusted keyrings
Oct 29 00:40:52.642767 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 29 00:40:52.642776 kernel: Key type asymmetric registered
Oct 29 00:40:52.642784 kernel: Asymmetric key parser 'x509' registered
Oct 29 00:40:52.642807 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 29 00:40:52.642818 kernel: io scheduler mq-deadline registered
Oct 29 00:40:52.642829 kernel: io scheduler kyber registered
Oct 29 00:40:52.642838 kernel: io scheduler bfq registered
Oct 29 00:40:52.642847 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 29 00:40:52.642856 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 29 00:40:52.642865 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 29 00:40:52.642874 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 29 00:40:52.642883 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 29 00:40:52.642894 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 29 00:40:52.642903 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 29 00:40:52.642912 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 29 00:40:52.642921 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 29 00:40:52.643103 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 29 00:40:52.643116 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 29 00:40:52.643279 kernel: rtc_cmos 00:04: registered as rtc0
Oct 29 00:40:52.643448 kernel: rtc_cmos 00:04: setting system clock to 2025-10-29T00:40:50 UTC (1761698450)
Oct 29 00:40:52.643642 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 29 00:40:52.643655 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 29 00:40:52.643663 kernel: efifb: probing for efifb
Oct 29 00:40:52.643672 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Oct 29 00:40:52.643681 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Oct 29 00:40:52.643694 kernel: efifb: scrolling: redraw
Oct 29 00:40:52.643702 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 29 00:40:52.643711 kernel: Console: switching to colour frame buffer device 160x50
Oct 29 00:40:52.643722 kernel: fb0: EFI VGA frame buffer device
Oct 29 00:40:52.643731 kernel: pstore: Using crash dump compression: deflate
Oct 29 00:40:52.643742 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 29 00:40:52.643751 kernel: NET: Registered PF_INET6 protocol family
Oct 29 00:40:52.643760 kernel: Segment Routing with IPv6
Oct 29 00:40:52.643769 kernel: In-situ OAM (IOAM) with IPv6
Oct 29 00:40:52.643777 kernel: NET: Registered PF_PACKET protocol family
Oct 29 00:40:52.643786 kernel: Key type dns_resolver registered
Oct 29 00:40:52.643794 kernel: IPI shorthand broadcast: enabled
Oct 29 00:40:52.643806 kernel: sched_clock: Marking stable (1482005138, 260441274)->(1802448024, -60001612)
Oct 29 00:40:52.643814 kernel: registered taskstats version 1
Oct 29 00:40:52.643823 kernel: Loading compiled-in X.509 certificates
Oct 29 00:40:52.643832 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 4eb70affb0e364bb9bcbea2a9416e57c31aed070'
Oct 29 00:40:52.643841 kernel: Demotion targets for Node 0: null
Oct 29 00:40:52.643849 kernel: Key type .fscrypt registered
Oct 29 00:40:52.643858 kernel: Key type fscrypt-provisioning registered
Oct 29 00:40:52.643869 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 29 00:40:52.643878 kernel: ima: Allocated hash algorithm: sha1
Oct 29 00:40:52.643886 kernel: ima: No architecture policies found
Oct 29 00:40:52.643895 kernel: clk: Disabling unused clocks
Oct 29 00:40:52.643906 kernel: Freeing unused kernel image (initmem) memory: 15964K
Oct 29 00:40:52.643915 kernel: Write protecting the kernel read-only data: 40960k
Oct 29 00:40:52.643923 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Oct 29 00:40:52.643934 kernel: Run /init as init process
Oct 29 00:40:52.643943 kernel: with arguments:
Oct 29 00:40:52.643952 kernel: /init
Oct 29 00:40:52.643960 kernel: with environment:
Oct 29 00:40:52.643969 kernel: HOME=/
Oct 29 00:40:52.643978 kernel: TERM=linux
Oct 29 00:40:52.643986 kernel: SCSI subsystem initialized
Oct 29 00:40:52.643997 kernel: libata version 3.00 loaded.
Oct 29 00:40:52.644175 kernel: ahci 0000:00:1f.2: version 3.0
Oct 29 00:40:52.644187 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 29 00:40:52.644463 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct 29 00:40:52.644656 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct 29 00:40:52.644833 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 29 00:40:52.645063 kernel: scsi host0: ahci
Oct 29 00:40:52.645256 kernel: scsi host1: ahci
Oct 29 00:40:52.645440 kernel: scsi host2: ahci
Oct 29 00:40:52.645673 kernel: scsi host3: ahci
Oct 29 00:40:52.645868 kernel: scsi host4: ahci
Oct 29 00:40:52.646060 kernel: scsi host5: ahci
Oct 29 00:40:52.646078 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Oct 29 00:40:52.646087 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Oct 29 00:40:52.646096 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Oct 29 00:40:52.646105 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Oct 29 00:40:52.646114 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Oct 29 00:40:52.646123 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Oct 29 00:40:52.646134 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 29 00:40:52.646143 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 29 00:40:52.646152 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 29 00:40:52.646161 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 29 00:40:52.646169 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 29 00:40:52.646178 kernel: ata3.00: LPM support broken, forcing max_power
Oct 29 00:40:52.646187 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 29 00:40:52.646196 kernel: ata3.00: applying bridge limits
Oct 29 00:40:52.646207 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 29 00:40:52.646216 kernel: ata3.00: LPM support broken, forcing max_power
Oct 29 00:40:52.646225 kernel: ata3.00: configured for UDMA/100
Oct 29 00:40:52.646435 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 29 00:40:52.646673 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 29 00:40:52.646851 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 29 00:40:52.646868 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 29 00:40:52.646877 kernel: GPT:16515071 != 27000831
Oct 29 00:40:52.646885 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 29 00:40:52.646894 kernel: GPT:16515071 != 27000831
Oct 29 00:40:52.646902 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 29 00:40:52.646911 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 29 00:40:52.646920 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:40:52.647120 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 29 00:40:52.647133 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 29 00:40:52.647321 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 29 00:40:52.647333 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 29 00:40:52.647342 kernel: device-mapper: uevent: version 1.0.3
Oct 29 00:40:52.647351 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 29 00:40:52.647364 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Oct 29 00:40:52.647373 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:40:52.647382 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:40:52.647390 kernel: raid6: avx2x4 gen() 28658 MB/s
Oct 29 00:40:52.647398 kernel: raid6: avx2x2 gen() 29506 MB/s
Oct 29 00:40:52.647407 kernel: raid6: avx2x1 gen() 25082 MB/s
Oct 29 00:40:52.647416 kernel: raid6: using algorithm avx2x2 gen() 29506 MB/s
Oct 29 00:40:52.647425 kernel: raid6: .... xor() 19374 MB/s, rmw enabled
Oct 29 00:40:52.647436 kernel: raid6: using avx2x2 recovery algorithm
Oct 29 00:40:52.647445 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:40:52.647461 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:40:52.647470 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:40:52.647479 kernel: xor: automatically using best checksumming function avx
Oct 29 00:40:52.647488 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:40:52.647496 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 29 00:40:52.647505 kernel: BTRFS: device fsid c0171910-1eb4-4fd7-b94c-9d6b11be282f devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (176)
Oct 29 00:40:52.647516 kernel: BTRFS info (device dm-0): first mount of filesystem c0171910-1eb4-4fd7-b94c-9d6b11be282f
Oct 29 00:40:52.647525 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:40:52.647534 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 29 00:40:52.647543 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 29 00:40:52.647551 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:40:52.647560 kernel: loop: module loaded
Oct 29 00:40:52.647568 kernel: loop0: detected capacity change from 0 to 100120
Oct 29 00:40:52.647579 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 29 00:40:52.647589 systemd[1]: Successfully made /usr/ read-only.
Oct 29 00:40:52.647618 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 29 00:40:52.647628 systemd[1]: Detected virtualization kvm.
Oct 29 00:40:52.647637 systemd[1]: Detected architecture x86-64.
Oct 29 00:40:52.647646 systemd[1]: Running in initrd.
Oct 29 00:40:52.647659 systemd[1]: No hostname configured, using default hostname.
Oct 29 00:40:52.647668 systemd[1]: Hostname set to .
Oct 29 00:40:52.647677 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 29 00:40:52.647686 systemd[1]: Queued start job for default target initrd.target.
Oct 29 00:40:52.647696 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 29 00:40:52.647705 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 00:40:52.647716 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 00:40:52.647727 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 29 00:40:52.647736 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 29 00:40:52.647746 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 29 00:40:52.647756 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 29 00:40:52.647765 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 00:40:52.647776 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 29 00:40:52.647786 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 29 00:40:52.647795 systemd[1]: Reached target paths.target - Path Units.
Oct 29 00:40:52.647804 systemd[1]: Reached target slices.target - Slice Units.
Oct 29 00:40:52.647813 systemd[1]: Reached target swap.target - Swaps.
Oct 29 00:40:52.647822 systemd[1]: Reached target timers.target - Timer Units.
Oct 29 00:40:52.647831 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 29 00:40:52.647843 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 29 00:40:52.647852 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 29 00:40:52.647861 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 29 00:40:52.647870 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 00:40:52.647880 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 29 00:40:52.647889 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 00:40:52.647900 systemd[1]: Reached target sockets.target - Socket Units.
Oct 29 00:40:52.647910 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 29 00:40:52.647919 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 29 00:40:52.647928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 29 00:40:52.647937 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 29 00:40:52.647947 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 29 00:40:52.647956 systemd[1]: Starting systemd-fsck-usr.service...
Oct 29 00:40:52.647967 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 29 00:40:52.647977 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 29 00:40:52.647986 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:40:52.647996 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 29 00:40:52.648008 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 00:40:52.648047 systemd-journald[311]: Collecting audit messages is disabled.
Oct 29 00:40:52.648069 systemd[1]: Finished systemd-fsck-usr.service.
Oct 29 00:40:52.648081 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 29 00:40:52.648090 systemd-journald[311]: Journal started
Oct 29 00:40:52.648111 systemd-journald[311]: Runtime Journal (/run/log/journal/02d8598a616846248c84e8faf08059c9) is 5.9M, max 47.9M, 41.9M free.
Oct 29 00:40:52.650645 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 29 00:40:52.663663 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 29 00:40:52.665600 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 29 00:40:52.673222 kernel: Bridge firewalling registered
Oct 29 00:40:52.671751 systemd-modules-load[314]: Inserted module 'br_netfilter'
Oct 29 00:40:52.671863 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 29 00:40:52.679514 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 29 00:40:52.757569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:40:52.768464 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 29 00:40:52.773482 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 29 00:40:52.776161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 29 00:40:52.778489 systemd-tmpfiles[328]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 29 00:40:52.789969 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 00:40:52.794732 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 00:40:52.798032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 29 00:40:52.803671 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 29 00:40:52.808217 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 29 00:40:52.810834 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 29 00:40:52.838046 dracut-cmdline[351]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae
Oct 29 00:40:52.881843 systemd-resolved[354]: Positive Trust Anchors:
Oct 29 00:40:52.881864 systemd-resolved[354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 00:40:52.881870 systemd-resolved[354]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 29 00:40:52.881914 systemd-resolved[354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 29 00:40:52.912279 systemd-resolved[354]: Defaulting to hostname 'linux'.
Oct 29 00:40:52.914670 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 29 00:40:52.915399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 29 00:40:52.998654 kernel: Loading iSCSI transport class v2.0-870.
Oct 29 00:40:53.015649 kernel: iscsi: registered transport (tcp)
Oct 29 00:40:53.048800 kernel: iscsi: registered transport (qla4xxx)
Oct 29 00:40:53.048846 kernel: QLogic iSCSI HBA Driver
Oct 29 00:40:53.079277 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 29 00:40:53.103124 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 00:40:53.105471 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 29 00:40:53.175786 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 29 00:40:53.179254 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 29 00:40:53.181719 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 29 00:40:53.269295 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 29 00:40:53.272883 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 00:40:53.309081 systemd-udevd[594]: Using default interface naming scheme 'v257'.
Oct 29 00:40:53.325618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 00:40:53.331510 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 29 00:40:53.366740 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 29 00:40:53.369860 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 29 00:40:53.375894 dracut-pre-trigger[663]: rd.md=0: removing MD RAID activation
Oct 29 00:40:53.413271 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 29 00:40:53.415560 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 29 00:40:53.428589 systemd-networkd[702]: lo: Link UP
Oct 29 00:40:53.428598 systemd-networkd[702]: lo: Gained carrier
Oct 29 00:40:53.429974 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 29 00:40:53.434557 systemd[1]: Reached target network.target - Network.
Oct 29 00:40:53.514493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 00:40:53.520304 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 29 00:40:53.584181 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 29 00:40:53.605749 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 29 00:40:53.617499 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 29 00:40:53.617563 kernel: cryptd: max_cpu_qlen set to 1000
Oct 29 00:40:53.624756 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 29 00:40:53.636788 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 29 00:40:53.647021 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 29 00:40:53.660513 kernel: AES CTR mode by8 optimization enabled
Oct 29 00:40:53.647977 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 00:40:53.648300 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:40:53.649232 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:40:53.649906 systemd-networkd[702]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:40:53.649912 systemd-networkd[702]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 29 00:40:53.650518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:40:53.652862 systemd-networkd[702]: eth0: Link UP
Oct 29 00:40:53.653093 systemd-networkd[702]: eth0: Gained carrier
Oct 29 00:40:53.653104 systemd-networkd[702]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:40:53.664390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 00:40:53.696710 disk-uuid[826]: Primary Header is updated.
Oct 29 00:40:53.696710 disk-uuid[826]: Secondary Entries is updated.
Oct 29 00:40:53.696710 disk-uuid[826]: Secondary Header is updated.
Oct 29 00:40:53.664518 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:40:53.673907 systemd-networkd[702]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 29 00:40:53.678804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:40:53.715906 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:40:53.770037 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 29 00:40:53.771546 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 29 00:40:53.775549 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 00:40:53.776187 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 29 00:40:53.781211 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 29 00:40:53.818166 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 29 00:40:54.738961 disk-uuid[836]: Warning: The kernel is still using the old partition table.
Oct 29 00:40:54.738961 disk-uuid[836]: The new table will be used at the next reboot or after you
Oct 29 00:40:54.738961 disk-uuid[836]: run partprobe(8) or kpartx(8)
Oct 29 00:40:54.738961 disk-uuid[836]: The operation has completed successfully.
Oct 29 00:40:54.756091 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 29 00:40:54.756250 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 29 00:40:54.758158 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 29 00:40:54.794651 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863)
Oct 29 00:40:54.798182 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:40:54.798219 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:40:54.802147 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 00:40:54.802175 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 00:40:54.809648 kernel: BTRFS info (device vda6): last unmount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:40:54.810973 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 29 00:40:54.813082 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 29 00:40:55.119220 ignition[882]: Ignition 2.22.0
Oct 29 00:40:55.119248 ignition[882]: Stage: fetch-offline
Oct 29 00:40:55.119402 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:40:55.119456 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:40:55.119668 ignition[882]: parsed url from cmdline: ""
Oct 29 00:40:55.119673 ignition[882]: no config URL provided
Oct 29 00:40:55.119682 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Oct 29 00:40:55.119696 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Oct 29 00:40:55.119760 ignition[882]: op(1): [started] loading QEMU firmware config module
Oct 29 00:40:55.119767 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 29 00:40:55.136859 ignition[882]: op(1): [finished] loading QEMU firmware config module
Oct 29 00:40:55.152807 systemd-networkd[702]: eth0: Gained IPv6LL
Oct 29 00:40:55.218166 ignition[882]: parsing config with SHA512: c095f4dea623af8729cd3d2783cfada5c6c258a14b23ea20e72f99abe7524892970a78695b119966a8d95467813a59de56200c81be07555bf53934ec8d3ecb21
Oct 29 00:40:55.223926 unknown[882]: fetched base config from "system"
Oct 29 00:40:55.223943 unknown[882]: fetched user config from "qemu"
Oct 29 00:40:55.224543 ignition[882]: fetch-offline: fetch-offline passed
Oct 29 00:40:55.227771 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 29 00:40:55.224681 ignition[882]: Ignition finished successfully
Oct 29 00:40:55.230471 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 29 00:40:55.231434 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 29 00:40:55.277765 ignition[892]: Ignition 2.22.0
Oct 29 00:40:55.277779 ignition[892]: Stage: kargs
Oct 29 00:40:55.277970 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:40:55.277980 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:40:55.279059 ignition[892]: kargs: kargs passed
Oct 29 00:40:55.284169 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 29 00:40:55.279107 ignition[892]: Ignition finished successfully
Oct 29 00:40:55.287257 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 29 00:40:55.463644 ignition[900]: Ignition 2.22.0
Oct 29 00:40:55.463663 ignition[900]: Stage: disks
Oct 29 00:40:55.463904 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:40:55.463920 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:40:55.469043 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 29 00:40:55.465922 ignition[900]: disks: disks passed
Oct 29 00:40:55.471944 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 29 00:40:55.465988 ignition[900]: Ignition finished successfully
Oct 29 00:40:55.475314 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 29 00:40:55.478554 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 29 00:40:55.482122 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 29 00:40:55.483119 systemd[1]: Reached target basic.target - Basic System.
Oct 29 00:40:55.484497 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 29 00:40:55.536839 systemd-fsck[910]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 29 00:40:55.544950 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 29 00:40:55.551588 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 29 00:40:55.681654 kernel: EXT4-fs (vda9): mounted filesystem ef53721c-fae5-4ad9-8976-8181c84bc175 r/w with ordered data mode. Quota mode: none.
Oct 29 00:40:55.682567 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 29 00:40:55.685933 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 29 00:40:55.691481 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 29 00:40:55.693253 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 29 00:40:55.695397 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 29 00:40:55.695444 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 29 00:40:55.695476 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 29 00:40:55.709037 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 29 00:40:55.712087 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 29 00:40:55.717824 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (919)
Oct 29 00:40:55.721142 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:40:55.721176 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:40:55.725477 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 00:40:55.725508 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 00:40:55.727060 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 29 00:40:55.779141 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory
Oct 29 00:40:55.783914 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory
Oct 29 00:40:55.789736 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory
Oct 29 00:40:55.794948 initrd-setup-root[964]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 29 00:40:55.902403 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 29 00:40:55.906564 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 29 00:40:55.909352 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 29 00:40:55.939237 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 29 00:40:55.941719 kernel: BTRFS info (device vda6): last unmount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:40:55.961792 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 29 00:40:56.113326 ignition[1033]: INFO : Ignition 2.22.0
Oct 29 00:40:56.113326 ignition[1033]: INFO : Stage: mount
Oct 29 00:40:56.115986 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 00:40:56.115986 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:40:56.115986 ignition[1033]: INFO : mount: mount passed
Oct 29 00:40:56.115986 ignition[1033]: INFO : Ignition finished successfully
Oct 29 00:40:56.124649 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 29 00:40:56.127833 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 29 00:40:56.155756 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 29 00:40:56.184828 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1045)
Oct 29 00:40:56.184860 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:40:56.184872 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:40:56.190120 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 00:40:56.190140 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 00:40:56.191830 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 29 00:40:56.244174 ignition[1062]: INFO : Ignition 2.22.0
Oct 29 00:40:56.244174 ignition[1062]: INFO : Stage: files
Oct 29 00:40:56.246829 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 00:40:56.246829 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:40:56.246829 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping
Oct 29 00:40:56.252257 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 29 00:40:56.252257 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 29 00:40:56.260232 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 29 00:40:56.262581 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 29 00:40:56.262581 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 29 00:40:56.261813 unknown[1062]: wrote ssh authorized keys file for user: core
Oct 29 00:40:56.268842 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 29 00:40:56.268842 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Oct 29 00:40:56.307165 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 29 00:40:56.436886 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 29 00:40:56.436886 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 29 00:40:56.443816 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 29 00:40:56.443816 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 00:40:56.443816 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 00:40:56.443816 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 00:40:56.443816 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 00:40:56.443816 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 00:40:56.443816 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 00:40:56.464632 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 00:40:56.464632 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 00:40:56.464632 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 29 00:40:56.464632 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 29 00:40:56.464632 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 29 00:40:56.464632 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Oct 29 00:40:56.752800 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 29 00:40:57.413102 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 29 00:40:57.413102 ignition[1062]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 29 00:40:57.419630 ignition[1062]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 00:40:57.427186 ignition[1062]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 00:40:57.427186 ignition[1062]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 29 00:40:57.427186 ignition[1062]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 29 00:40:57.435252 ignition[1062]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 29 00:40:57.435252 ignition[1062]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 29 00:40:57.435252 ignition[1062]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 29 00:40:57.435252 ignition[1062]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 29 00:40:57.453544 ignition[1062]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 29 00:40:57.459161 ignition[1062]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 29 00:40:57.461817 ignition[1062]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 29 00:40:57.461817 ignition[1062]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 29 00:40:57.461817 ignition[1062]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 29 00:40:57.461817 ignition[1062]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 29 00:40:57.461817 ignition[1062]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 29 00:40:57.461817 ignition[1062]: INFO : files: files passed
Oct 29 00:40:57.461817 ignition[1062]: INFO : Ignition finished successfully
Oct 29 00:40:57.478260 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 29 00:40:57.483597 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 29 00:40:57.488772 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 29 00:40:57.509394 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 29 00:40:57.509540 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 29 00:40:57.518178 initrd-setup-root-after-ignition[1092]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 29 00:40:57.523993 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 00:40:57.523993 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 00:40:57.529192 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 00:40:57.533878 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 00:40:57.536288 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 29 00:40:57.541826 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 29 00:40:57.607144 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 29 00:40:57.607289 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 29 00:40:57.608835 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 29 00:40:57.613590 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 29 00:40:57.617571 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 29 00:40:57.618619 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 29 00:40:57.640989 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 29 00:40:57.643510 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 29 00:40:57.679391 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 29 00:40:57.679530 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 29 00:40:57.680474 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 00:40:57.685856 systemd[1]: Stopped target timers.target - Timer Units.
Oct 29 00:40:57.686390 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 29 00:40:57.686506 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 29 00:40:57.695379 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 29 00:40:57.698890 systemd[1]: Stopped target basic.target - Basic System.
Oct 29 00:40:57.700064 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 29 00:40:57.703810 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 29 00:40:57.704361 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 29 00:40:57.711186 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 29 00:40:57.714414 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 29 00:40:57.718238 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 29 00:40:57.721231 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 29 00:40:57.722105 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 29 00:40:57.728432 systemd[1]: Stopped target swap.target - Swaps.
Oct 29 00:40:57.731749 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 29 00:40:57.731876 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 29 00:40:57.736855 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 29 00:40:57.738030 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 00:40:57.742368 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 29 00:40:57.742652 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 00:40:57.746222 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 29 00:40:57.746350 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 29 00:40:57.753078 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 29 00:40:57.753202 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 29 00:40:57.756576 systemd[1]: Stopped target paths.target - Path Units.
Oct 29 00:40:57.757374 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 29 00:40:57.760724 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 00:40:57.761727 systemd[1]: Stopped target slices.target - Slice Units.
Oct 29 00:40:57.765458 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 29 00:40:57.768338 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 29 00:40:57.768431 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 29 00:40:57.771496 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 29 00:40:57.771580 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 29 00:40:57.775537 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 29 00:40:57.775692 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 00:40:57.780302 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 29 00:40:57.780423 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 29 00:40:57.784478 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 29 00:40:57.786363 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 29 00:40:57.786491 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 00:40:57.805297 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 29 00:40:57.806774 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 29 00:40:57.806984 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 00:40:57.808137 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 29 00:40:57.808297 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 00:40:57.812561 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 29 00:40:57.812740 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 29 00:40:57.824503 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 29 00:40:57.828671 ignition[1118]: INFO : Ignition 2.22.0
Oct 29 00:40:57.828671 ignition[1118]: INFO : Stage: umount
Oct 29 00:40:57.828671 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 00:40:57.828671 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:40:57.828671 ignition[1118]: INFO : umount: umount passed
Oct 29 00:40:57.828671 ignition[1118]: INFO : Ignition finished successfully
Oct 29 00:40:57.824632 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 29 00:40:57.832118 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 29 00:40:57.832285 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 29 00:40:57.836264 systemd[1]: Stopped target network.target - Network.
Oct 29 00:40:57.836914 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 29 00:40:57.836971 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 29 00:40:57.837216 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 29 00:40:57.837264 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 29 00:40:57.837503 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 29 00:40:57.837556 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 29 00:40:57.838069 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 29 00:40:57.838115 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 29 00:40:57.838430 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 29 00:40:57.839015 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 29 00:40:57.849358 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 29 00:40:57.849499 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 29 00:40:57.858175 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 29 00:40:57.878709 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 29 00:40:57.878974 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 29 00:40:57.886889 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 29 00:40:57.890772 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 29 00:40:57.890838 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 00:40:57.896241 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 29 00:40:57.897185 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 29 00:40:57.897285 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 29 00:40:57.900070 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 29 00:40:57.900145 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 29 00:40:57.900581 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 29 00:40:57.900647 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 29 00:40:57.901207 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 00:40:57.931458 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 29 00:40:57.931686 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 29 00:40:57.932893 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 29 00:40:57.932985 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 29 00:40:57.943254 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 29 00:40:57.943471 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 00:40:57.946061 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 29 00:40:57.946171 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 29 00:40:57.947037 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 29 00:40:57.947077 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 00:40:57.952343 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 29 00:40:57.952416 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 29 00:40:57.953971 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 29 00:40:57.954041 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 29 00:40:57.961828 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 29 00:40:57.961890 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 29 00:40:57.968561 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 29 00:40:57.969590 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 29 00:40:57.969674 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 00:40:57.974328 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 29 00:40:57.974385 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 00:40:57.977917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 00:40:57.977972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:40:57.997650 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 29 00:40:57.997784 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 29 00:40:58.026406 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 29 00:40:58.026547 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 29 00:40:58.029858 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 29 00:40:58.031266 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 29 00:40:58.045430 systemd[1]: Switching root.
Oct 29 00:40:58.080475 systemd-journald[311]: Journal stopped
Oct 29 00:40:59.644233 systemd-journald[311]: Received SIGTERM from PID 1 (systemd).
Oct 29 00:40:59.644314 kernel: SELinux: policy capability network_peer_controls=1
Oct 29 00:40:59.644344 kernel: SELinux: policy capability open_perms=1
Oct 29 00:40:59.644357 kernel: SELinux: policy capability extended_socket_class=1
Oct 29 00:40:59.644374 kernel: SELinux: policy capability always_check_network=0
Oct 29 00:40:59.644387 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 29 00:40:59.644400 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 29 00:40:59.644417 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 29 00:40:59.644437 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 29 00:40:59.644456 kernel: SELinux: policy capability userspace_initial_context=0
Oct 29 00:40:59.644469 kernel: audit: type=1403 audit(1761698458.650:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 29 00:40:59.644487 systemd[1]: Successfully loaded SELinux policy in 75.867ms.
Oct 29 00:40:59.644508 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.374ms.
Oct 29 00:40:59.644523 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 29 00:40:59.644536 systemd[1]: Detected virtualization kvm.
Oct 29 00:40:59.644549 systemd[1]: Detected architecture x86-64.
Oct 29 00:40:59.644570 systemd[1]: Detected first boot.
Oct 29 00:40:59.644583 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 29 00:40:59.644596 zram_generator::config[1164]: No configuration found.
Oct 29 00:40:59.644625 kernel: Guest personality initialized and is inactive
Oct 29 00:40:59.644637 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 29 00:40:59.644654 kernel: Initialized host personality
Oct 29 00:40:59.644666 kernel: NET: Registered PF_VSOCK protocol family
Oct 29 00:40:59.644686 systemd[1]: Populated /etc with preset unit settings.
Oct 29 00:40:59.644700 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 29 00:40:59.644712 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 29 00:40:59.644732 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 29 00:40:59.644745 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 29 00:40:59.644758 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 29 00:40:59.644779 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 29 00:40:59.644800 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 29 00:40:59.644814 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 29 00:40:59.644827 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 29 00:40:59.644840 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 29 00:40:59.644856 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 29 00:40:59.644869 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 00:40:59.644882 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 00:40:59.644902 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 29 00:40:59.644916 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 29 00:40:59.644930 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 29 00:40:59.644944 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 29 00:40:59.644957 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 29 00:40:59.644977 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 00:40:59.644991 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 29 00:40:59.645004 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 29 00:40:59.645017 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 29 00:40:59.645030 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 29 00:40:59.645043 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 29 00:40:59.645056 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 00:40:59.645069 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 29 00:40:59.645104 systemd[1]: Reached target slices.target - Slice Units.
Oct 29 00:40:59.645121 systemd[1]: Reached target swap.target - Swaps.
Oct 29 00:40:59.645133 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 29 00:40:59.645146 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 29 00:40:59.645160 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 29 00:40:59.645173 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 00:40:59.645188 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 29 00:40:59.645211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 00:40:59.645224 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 29 00:40:59.645237 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 29 00:40:59.645250 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 29 00:40:59.645272 systemd[1]: Mounting media.mount - External Media Directory...
Oct 29 00:40:59.645285 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:40:59.645299 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 29 00:40:59.645319 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 29 00:40:59.645332 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 29 00:40:59.645348 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 29 00:40:59.645361 systemd[1]: Reached target machines.target - Containers.
Oct 29 00:40:59.645374 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 29 00:40:59.645390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 00:40:59.645402 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 29 00:40:59.645429 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 29 00:40:59.645442 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 00:40:59.645455 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 29 00:40:59.645468 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 00:40:59.645480 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 29 00:40:59.645493 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 00:40:59.645506 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 29 00:40:59.645527 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 29 00:40:59.645540 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 29 00:40:59.645553 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 29 00:40:59.645566 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 29 00:40:59.645580 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 00:40:59.645593 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 29 00:40:59.645635 kernel: fuse: init (API version 7.41)
Oct 29 00:40:59.645650 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 29 00:40:59.645663 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 29 00:40:59.645675 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 29 00:40:59.645688 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 29 00:40:59.645701 kernel: ACPI: bus type drm_connector registered
Oct 29 00:40:59.645713 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 29 00:40:59.645735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:40:59.645754 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 29 00:40:59.645767 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 29 00:40:59.645780 systemd[1]: Mounted media.mount - External Media Directory.
Oct 29 00:40:59.645800 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 29 00:40:59.645813 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 29 00:40:59.645825 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 29 00:40:59.645858 systemd-journald[1239]: Collecting audit messages is disabled.
Oct 29 00:40:59.645881 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 29 00:40:59.645894 systemd-journald[1239]: Journal started
Oct 29 00:40:59.645925 systemd-journald[1239]: Runtime Journal (/run/log/journal/02d8598a616846248c84e8faf08059c9) is 5.9M, max 47.9M, 41.9M free.
Oct 29 00:40:59.309910 systemd[1]: Queued start job for default target multi-user.target.
Oct 29 00:40:59.329892 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 29 00:40:59.330469 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 29 00:40:59.647870 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 29 00:40:59.649689 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 00:40:59.650367 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 29 00:40:59.650618 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 29 00:40:59.651497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 00:40:59.651758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 00:40:59.652341 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 29 00:40:59.652587 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 29 00:40:59.653553 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 00:40:59.653808 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 00:40:59.654566 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 29 00:40:59.654826 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 29 00:40:59.655427 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 00:40:59.655789 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 00:40:59.656731 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 29 00:40:59.668427 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 00:40:59.672134 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 29 00:40:59.690876 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 29 00:40:59.693791 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 29 00:40:59.697463 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 29 00:40:59.700580 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 29 00:40:59.702446 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 29 00:40:59.702589 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 29 00:40:59.705291 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 29 00:40:59.707759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 00:40:59.721494 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 29 00:40:59.724587 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 29 00:40:59.727039 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 00:40:59.728499 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 29 00:40:59.730497 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 00:40:59.732779 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 29 00:40:59.740428 systemd-journald[1239]: Time spent on flushing to /var/log/journal/02d8598a616846248c84e8faf08059c9 is 18.360ms for 1029 entries.
Oct 29 00:40:59.740428 systemd-journald[1239]: System Journal (/var/log/journal/02d8598a616846248c84e8faf08059c9) is 8M, max 163.5M, 155.5M free.
Oct 29 00:40:59.784041 systemd-journald[1239]: Received client request to flush runtime journal.
Oct 29 00:40:59.784096 kernel: loop1: detected capacity change from 0 to 224512
Oct 29 00:40:59.737166 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 29 00:40:59.742227 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 29 00:40:59.746970 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 29 00:40:59.751483 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 00:40:59.756524 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 29 00:40:59.759519 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 29 00:40:59.761924 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 29 00:40:59.772497 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 29 00:40:59.776533 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 29 00:40:59.779479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 29 00:40:59.787988 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 29 00:40:59.799454 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 29 00:40:59.802692 kernel: loop2: detected capacity change from 0 to 110976
Oct 29 00:40:59.805062 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 29 00:40:59.808429 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 29 00:40:59.821815 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 29 00:40:59.828118 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 29 00:40:59.839486 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Oct 29 00:40:59.839506 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Oct 29 00:40:59.846299 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 00:40:59.849809 kernel: loop3: detected capacity change from 0 to 128048
Oct 29 00:40:59.878643 kernel: loop4: detected capacity change from 0 to 224512
Oct 29 00:40:59.882868 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 29 00:40:59.890636 kernel: loop5: detected capacity change from 0 to 110976
Oct 29 00:40:59.899671 kernel: loop6: detected capacity change from 0 to 128048
Oct 29 00:40:59.911024 (sd-merge)[1307]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 29 00:40:59.916551 (sd-merge)[1307]: Merged extensions into '/usr'.
Oct 29 00:40:59.921452 systemd[1]: Reload requested from client PID 1282 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 29 00:40:59.921551 systemd[1]: Reloading...
Oct 29 00:40:59.958925 systemd-resolved[1298]: Positive Trust Anchors:
Oct 29 00:40:59.958948 systemd-resolved[1298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 00:40:59.958953 systemd-resolved[1298]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 29 00:40:59.958985 systemd-resolved[1298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 29 00:40:59.969077 systemd-resolved[1298]: Defaulting to hostname 'linux'.
Oct 29 00:40:59.998121 zram_generator::config[1347]: No configuration found.
Oct 29 00:41:00.198726 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 29 00:41:00.198829 systemd[1]: Reloading finished in 276 ms.
Oct 29 00:41:00.225269 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 29 00:41:00.227732 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 29 00:41:00.232868 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 29 00:41:00.255123 systemd[1]: Starting ensure-sysext.service...
Oct 29 00:41:00.257938 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 29 00:41:00.271960 systemd[1]: Reload requested from client PID 1377 ('systemctl') (unit ensure-sysext.service)...
Oct 29 00:41:00.271985 systemd[1]: Reloading...
Oct 29 00:41:00.282795 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 29 00:41:00.282832 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 29 00:41:00.283169 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 29 00:41:00.283489 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 29 00:41:00.284468 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 29 00:41:00.284767 systemd-tmpfiles[1378]: ACLs are not supported, ignoring.
Oct 29 00:41:00.284842 systemd-tmpfiles[1378]: ACLs are not supported, ignoring.
Oct 29 00:41:00.291010 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot.
Oct 29 00:41:00.291022 systemd-tmpfiles[1378]: Skipping /boot
Oct 29 00:41:00.302072 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot.
Oct 29 00:41:00.302086 systemd-tmpfiles[1378]: Skipping /boot
Oct 29 00:41:00.359652 zram_generator::config[1411]: No configuration found.
Oct 29 00:41:00.633161 systemd[1]: Reloading finished in 360 ms.
Oct 29 00:41:00.658553 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 29 00:41:00.681268 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 00:41:00.693793 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 29 00:41:00.696685 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 29 00:41:00.718140 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 29 00:41:00.721880 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 29 00:41:00.727826 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 00:41:00.731915 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 29 00:41:00.739411 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:00.739708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 00:41:00.749330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 00:41:00.755548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 00:41:00.759713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 00:41:00.764547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 00:41:00.764855 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 00:41:00.765237 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:00.770108 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 29 00:41:00.773100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 00:41:00.773341 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 00:41:00.779639 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 00:41:00.779877 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 00:41:00.782510 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 00:41:00.782758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 00:41:00.789009 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 00:41:00.789314 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 00:41:00.792472 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 29 00:41:00.794595 augenrules[1479]: No rules
Oct 29 00:41:00.795464 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 29 00:41:00.795595 systemd-udevd[1452]: Using default interface naming scheme 'v257'.
Oct 29 00:41:00.795786 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 29 00:41:00.802804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:00.802978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 00:41:00.804459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 00:41:00.809814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 00:41:00.820543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 00:41:00.822366 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 00:41:00.822508 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 00:41:00.822599 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:00.823942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 00:41:00.824165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 00:41:00.827383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 00:41:00.827594 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 00:41:00.830369 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 00:41:00.830643 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 00:41:00.838970 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 29 00:41:00.841329 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 00:41:00.849081 systemd[1]: Finished ensure-sysext.service.
Oct 29 00:41:00.851802 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:00.856039 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 29 00:41:00.860271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 00:41:00.868203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 00:41:00.872675 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 29 00:41:00.876919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 00:41:00.879992 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 00:41:00.881867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 00:41:00.881913 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 00:41:00.892981 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 29 00:41:00.896796 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 29 00:41:00.898716 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 29 00:41:00.898754 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:41:00.902687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 00:41:00.902931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 00:41:00.906089 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 29 00:41:00.906323 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 29 00:41:00.908844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 00:41:00.909705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 00:41:00.912984 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 00:41:00.913213 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 00:41:00.930489 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 29 00:41:00.932418 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 00:41:00.932469 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 00:41:00.936290 augenrules[1503]: /sbin/augenrules: No change
Oct 29 00:41:00.950534 augenrules[1546]: No rules
Oct 29 00:41:00.951793 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 29 00:41:00.952194 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 29 00:41:01.004208 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 29 00:41:01.008334 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 29 00:41:01.107442 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 29 00:41:01.108180 systemd-networkd[1520]: lo: Link UP
Oct 29 00:41:01.108451 systemd-networkd[1520]: lo: Gained carrier
Oct 29 00:41:01.110114 systemd[1]: Reached target time-set.target - System Time Set.
Oct 29 00:41:01.110552 systemd-networkd[1520]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:41:01.110556 systemd-networkd[1520]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 29 00:41:01.112298 systemd-networkd[1520]: eth0: Link UP
Oct 29 00:41:01.112518 systemd-networkd[1520]: eth0: Gained carrier
Oct 29 00:41:01.112538 systemd-networkd[1520]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:41:01.113097 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 29 00:41:01.115508 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 29 00:41:01.119014 systemd[1]: Reached target network.target - Network.
Oct 29 00:41:01.123798 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 29 00:41:01.128917 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 29 00:41:01.131929 systemd-networkd[1520]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 29 00:41:01.133264 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection.
Oct 29 00:41:01.600064 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 29 00:41:01.600184 systemd-timesyncd[1522]: Initial clock synchronization to Wed 2025-10-29 00:41:01.599876 UTC.
Oct 29 00:41:01.600831 systemd-resolved[1298]: Clock change detected. Flushing caches.
Oct 29 00:41:01.631133 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 29 00:41:01.638710 kernel: mousedev: PS/2 mouse device common for all mice
Oct 29 00:41:01.649408 kernel: ACPI: button: Power Button [PWRF]
Oct 29 00:41:01.655423 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 29 00:41:01.655803 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 29 00:41:01.656051 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 29 00:41:01.690139 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 29 00:41:01.765792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:41:01.774587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 00:41:01.774967 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:41:01.780894 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:41:01.825663 kernel: kvm_amd: TSC scaling supported
Oct 29 00:41:01.825745 kernel: kvm_amd: Nested Virtualization enabled
Oct 29 00:41:01.825772 kernel: kvm_amd: Nested Paging enabled
Oct 29 00:41:01.827747 kernel: kvm_amd: LBR virtualization supported
Oct 29 00:41:01.828089 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 29 00:41:01.828439 kernel: kvm_amd: Virtual GIF supported
Oct 29 00:41:01.991441 kernel: EDAC MC: Ver: 3.0.0
Oct 29 00:41:02.014908 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:41:02.151603 ldconfig[1449]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 29 00:41:02.159238 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 29 00:41:02.163093 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 29 00:41:02.199298 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 29 00:41:02.201396 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 29 00:41:02.203290 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 29 00:41:02.205441 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 29 00:41:02.207543 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 29 00:41:02.209627 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 29 00:41:02.211557 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 29 00:41:02.213781 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 29 00:41:02.215878 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 29 00:41:02.215913 systemd[1]: Reached target paths.target - Path Units.
Oct 29 00:41:02.217440 systemd[1]: Reached target timers.target - Timer Units.
Oct 29 00:41:02.220146 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 29 00:41:02.224011 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 29 00:41:02.227918 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 29 00:41:02.230113 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 29 00:41:02.232173 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 29 00:41:02.236911 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 29 00:41:02.238873 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 29 00:41:02.241344 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 29 00:41:02.243841 systemd[1]: Reached target sockets.target - Socket Units.
Oct 29 00:41:02.245404 systemd[1]: Reached target basic.target - Basic System.
Oct 29 00:41:02.246921 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 29 00:41:02.246962 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 29 00:41:02.248154 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 29 00:41:02.250940 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 29 00:41:02.253529 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 29 00:41:02.256472 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 29 00:41:02.266624 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 29 00:41:02.268460 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 29 00:41:02.269411 jq[1599]: false
Oct 29 00:41:02.270414 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 29 00:41:02.273688 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 29 00:41:02.277560 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 29 00:41:02.281647 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Refreshing passwd entry cache
Oct 29 00:41:02.281503 oslogin_cache_refresh[1601]: Refreshing passwd entry cache
Oct 29 00:41:02.282322 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 29 00:41:02.286629 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 29 00:41:02.290425 extend-filesystems[1600]: Found /dev/vda6
Oct 29 00:41:02.292497 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Failure getting users, quitting
Oct 29 00:41:02.292497 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 29 00:41:02.292497 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Refreshing group entry cache
Oct 29 00:41:02.292033 oslogin_cache_refresh[1601]: Failure getting users, quitting
Oct 29 00:41:02.292057 oslogin_cache_refresh[1601]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 29 00:41:02.292113 oslogin_cache_refresh[1601]: Refreshing group entry cache
Oct 29 00:41:02.294283 extend-filesystems[1600]: Found /dev/vda9
Oct 29 00:41:02.296648 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 29 00:41:02.297677 extend-filesystems[1600]: Checking size of /dev/vda9
Oct 29 00:41:02.298291 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 29 00:41:02.298799 oslogin_cache_refresh[1601]: Failure getting groups, quitting
Oct 29 00:41:02.301894 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Failure getting groups, quitting
Oct 29 00:41:02.301894 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 29 00:41:02.298859 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 29 00:41:02.298809 oslogin_cache_refresh[1601]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 29 00:41:02.300564 systemd[1]: Starting update-engine.service - Update Engine...
Oct 29 00:41:02.305478 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 29 00:41:02.313974 jq[1619]: true
Oct 29 00:41:02.314274 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 29 00:41:02.316774 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 29 00:41:02.317042 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 29 00:41:02.317397 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 29 00:41:02.318200 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 29 00:41:02.318848 extend-filesystems[1600]: Resized partition /dev/vda9
Oct 29 00:41:02.320054 systemd[1]: motdgen.service: Deactivated successfully.
Oct 29 00:41:02.320315 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 29 00:41:02.326027 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 29 00:41:02.326413 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 29 00:41:02.328468 extend-filesystems[1630]: resize2fs 1.47.3 (8-Jul-2025)
Oct 29 00:41:02.341487 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Oct 29 00:41:02.347711 (ntainerd)[1636]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 29 00:41:02.355811 update_engine[1618]: I20251029 00:41:02.354911 1618 main.cc:92] Flatcar Update Engine starting
Oct 29 00:41:02.373492 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Oct 29 00:41:02.397322 jq[1635]: true
Oct 29 00:41:02.403593 extend-filesystems[1630]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 29 00:41:02.403593 extend-filesystems[1630]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 29 00:41:02.403593 extend-filesystems[1630]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Oct 29 00:41:02.409723 extend-filesystems[1600]: Resized filesystem in /dev/vda9
Oct 29 00:41:02.408902 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 29 00:41:02.409215 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 29 00:41:02.421238 tar[1633]: linux-amd64/LICENSE
Oct 29 00:41:02.420706 systemd-logind[1612]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 29 00:41:02.422056 tar[1633]: linux-amd64/helm
Oct 29 00:41:02.420728 systemd-logind[1612]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 29 00:41:02.421921 systemd-logind[1612]: New seat seat0.
Oct 29 00:41:02.424184 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 29 00:41:02.457589 dbus-daemon[1597]: [system] SELinux support is enabled
Oct 29 00:41:02.457924 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 29 00:41:02.462192 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 29 00:41:02.466605 update_engine[1618]: I20251029 00:41:02.462702 1618 update_check_scheduler.cc:74] Next update check in 8m20s
Oct 29 00:41:02.462787 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 29 00:41:02.465479 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 29 00:41:02.465500 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 29 00:41:02.472741 systemd[1]: Started update-engine.service - Update Engine.
Oct 29 00:41:02.473238 dbus-daemon[1597]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 29 00:41:02.477807 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 29 00:41:02.490060 bash[1666]: Updated "/home/core/.ssh/authorized_keys"
Oct 29 00:41:02.495246 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 29 00:41:02.498398 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 29 00:41:02.504726 sshd_keygen[1626]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 29 00:41:02.617820 locksmithd[1667]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 29 00:41:02.618490 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 29 00:41:02.622616 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 29 00:41:02.643817 systemd[1]: issuegen.service: Deactivated successfully.
Oct 29 00:41:02.644103 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 29 00:41:02.648535 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 29 00:41:02.659490 systemd-networkd[1520]: eth0: Gained IPv6LL
Oct 29 00:41:02.663157 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 29 00:41:02.666000 systemd[1]: Reached target network-online.target - Network is Online.
Oct 29 00:41:02.670576 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 29 00:41:02.675422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 29 00:41:02.681757 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 29 00:41:02.685428 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 29 00:41:02.838023 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 29 00:41:02.844047 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 29 00:41:02.846050 systemd[1]: Reached target getty.target - Login Prompts.
Oct 29 00:41:02.880230 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 29 00:41:02.885432 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 29 00:41:02.885803 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 29 00:41:02.888685 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 29 00:41:02.946117 containerd[1636]: time="2025-10-29T00:41:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 29 00:41:02.947036 containerd[1636]: time="2025-10-29T00:41:02.947008364Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 29 00:41:02.963586 containerd[1636]: time="2025-10-29T00:41:02.962754874Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.159µs"
Oct 29 00:41:02.963586 containerd[1636]: time="2025-10-29T00:41:02.962813785Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 29 00:41:02.963586 containerd[1636]: time="2025-10-29T00:41:02.962837018Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 29 00:41:02.963586 containerd[1636]: time="2025-10-29T00:41:02.963195450Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 29 00:41:02.963586 containerd[1636]: time="2025-10-29T00:41:02.963217161Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 29 00:41:02.963586 containerd[1636]: time="2025-10-29T00:41:02.963256805Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 29 00:41:02.963586 containerd[1636]: time="2025-10-29T00:41:02.963459896Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 29 00:41:02.963586 containerd[1636]: time="2025-10-29T00:41:02.963477820Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 29 00:41:02.963871 containerd[1636]: time="2025-10-29T00:41:02.963841271Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 29 00:41:02.963871 containerd[1636]: time="2025-10-29T00:41:02.963866419Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 29 00:41:02.963912 containerd[1636]: time="2025-10-29T00:41:02.963897627Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 29 00:41:02.963943 containerd[1636]: time="2025-10-29T00:41:02.963923105Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 29 00:41:02.964121 containerd[1636]: time="2025-10-29T00:41:02.964096360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 29 00:41:02.964485 containerd[1636]: time="2025-10-29T00:41:02.964460423Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 29 00:41:02.964616 containerd[1636]: time="2025-10-29T00:41:02.964509915Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 29 00:41:02.964616 containerd[1636]: time="2025-10-29T00:41:02.964607809Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 29 00:41:02.964696 containerd[1636]: time="2025-10-29T00:41:02.964676037Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 29 00:41:02.965103 containerd[1636]: time="2025-10-29T00:41:02.965077079Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 29 00:41:02.965194 containerd[1636]: time="2025-10-29T00:41:02.965175393Z" level=info msg="metadata content store policy set" policy=shared
Oct 29 00:41:02.982026 containerd[1636]: time="2025-10-29T00:41:02.981929954Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 29 00:41:02.982109 containerd[1636]: time="2025-10-29T00:41:02.982077691Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 29 00:41:02.982109 containerd[1636]: time="2025-10-29T00:41:02.982105954Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 29 00:41:02.982149 containerd[1636]: time="2025-10-29T00:41:02.982122756Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 29 00:41:02.982449 containerd[1636]: time="2025-10-29T00:41:02.982420314Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 29 00:41:02.982449 containerd[1636]: time="2025-10-29T00:41:02.982449278Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 29 00:41:02.982514 containerd[1636]: time="2025-10-29T00:41:02.982476138Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 29 00:41:02.982514 containerd[1636]: time="2025-10-29T00:41:02.982505644Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 29 00:41:02.982575 containerd[1636]: time="2025-10-29T00:41:02.982518708Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 29 00:41:02.982575 containerd[1636]: time="2025-10-29T00:41:02.982530210Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 29 00:41:02.982575 containerd[1636]: time="2025-10-29T00:41:02.982539978Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 29 00:41:02.982575 containerd[1636]: time="2025-10-29T00:41:02.982556249Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 29 00:41:02.982841 containerd[1636]: time="2025-10-29T00:41:02.982807600Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 29 00:41:02.982889 containerd[1636]: time="2025-10-29T00:41:02.982867011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 29 00:41:02.982927 containerd[1636]: time="2025-10-29T00:41:02.982895224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 29 00:41:02.982927 containerd[1636]: time="2025-10-29T00:41:02.982918348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 29 00:41:02.982975 containerd[1636]: time="2025-10-29T00:41:02.982940770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 29 00:41:02.982975 containerd[1636]: time="2025-10-29T00:41:02.982961970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 29 00:41:02.982975 containerd[1636]: time="2025-10-29T00:41:02.982973872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 29 00:41:02.983050 containerd[1636]: time="2025-10-29T00:41:02.982988389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 29 00:41:02.983050 containerd[1636]: time="2025-10-29T00:41:02.983014438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 29 00:41:02.983050 containerd[1636]: time="2025-10-29T00:41:02.983028424Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 29 00:41:02.983050 containerd[1636]: time="2025-10-29T00:41:02.983043132Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 29 00:41:02.983217 containerd[1636]: time="2025-10-29T00:41:02.983189035Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 29 00:41:02.983263 containerd[1636]: time="2025-10-29T00:41:02.983236324Z" level=info msg="Start snapshots syncer"
Oct 29 00:41:02.983310 containerd[1636]: time="2025-10-29T00:41:02.983291327Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 29 00:41:02.983720 containerd[1636]: time="2025-10-29T00:41:02.983645151Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.983761689Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.983852640Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.983972675Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.984006919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.984018902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.984032708Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.984049920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.984061241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.984079796Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.984151500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.984175455Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.984198649Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 29 00:41:02.984233 containerd[1636]: time="2025-10-29T00:41:02.984232232Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 29 00:41:02.984609 containerd[1636]: time="2025-10-29T00:41:02.984254383Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 29 00:41:02.984609 containerd[1636]: time="2025-10-29T00:41:02.984264402Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 29 00:41:02.984609 containerd[1636]: time="2025-10-29T00:41:02.984275232Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 29 00:41:02.984609 containerd[1636]: time="2025-10-29T00:41:02.984283748Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Oct 29 00:41:02.984609 containerd[1636]: time="2025-10-29T00:41:02.984293817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Oct 29 00:41:02.984609 containerd[1636]: time="2025-10-29T00:41:02.984307303Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Oct 29 00:41:02.984609 containerd[1636]: time="2025-10-29T00:41:02.984344192Z" level=info msg="runtime interface created"
Oct 29 00:41:02.984609 containerd[1636]: time="2025-10-29T00:41:02.984356795Z" level=info msg="created NRI interface"
Oct 29 00:41:02.984787 containerd[1636]: time="2025-10-29T00:41:02.984376723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Oct 29 00:41:02.984787 containerd[1636]: time="2025-10-29T00:41:02.984689299Z" level=info msg="Connect containerd service"
Oct 29 00:41:02.984875 containerd[1636]: time="2025-10-29T00:41:02.984819854Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 29 00:41:02.986242 containerd[1636]: time="2025-10-29T00:41:02.986203709Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 29 00:41:03.240799 tar[1633]: linux-amd64/README.md
Oct 29 00:41:03.260344 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 29 00:41:03.274607 containerd[1636]: time="2025-10-29T00:41:03.274495319Z" level=info msg="Start subscribing containerd event"
Oct 29 00:41:03.274845 containerd[1636]: time="2025-10-29T00:41:03.274692108Z" level=info msg="Start recovering state"
Oct 29 00:41:03.275097 containerd[1636]: time="2025-10-29T00:41:03.275071249Z" level=info msg="Start event monitor"
Oct 29 00:41:03.275151 containerd[1636]: time="2025-10-29T00:41:03.275133976Z" level=info msg="Start cni network conf syncer for default"
Oct 29 00:41:03.275243 containerd[1636]: time="2025-10-29T00:41:03.275148403Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 29 00:41:03.275270 containerd[1636]: time="2025-10-29T00:41:03.275245195Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 29 00:41:03.275319 containerd[1636]: time="2025-10-29T00:41:03.275152711Z" level=info msg="Start streaming server"
Oct 29 00:41:03.275350 containerd[1636]: time="2025-10-29T00:41:03.275316919Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Oct 29 00:41:03.275350 containerd[1636]: time="2025-10-29T00:41:03.275333110Z" level=info msg="runtime interface starting up..."
Oct 29 00:41:03.275350 containerd[1636]: time="2025-10-29T00:41:03.275341846Z" level=info msg="starting plugins..."
Oct 29 00:41:03.275470 containerd[1636]: time="2025-10-29T00:41:03.275366172Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 29 00:41:03.275635 containerd[1636]: time="2025-10-29T00:41:03.275610951Z" level=info msg="containerd successfully booted in 0.330486s"
Oct 29 00:41:03.275829 systemd[1]: Started containerd.service - containerd container runtime.
Oct 29 00:41:03.874792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 00:41:03.877253 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 29 00:41:03.879356 systemd[1]: Startup finished in 3.067s (kernel) + 6.457s (initrd) + 4.835s (userspace) = 14.360s.
Oct 29 00:41:03.880152 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 29 00:41:04.297149 kubelet[1738]: E1029 00:41:04.297047 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 29 00:41:04.301593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 29 00:41:04.301805 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 29 00:41:04.302253 systemd[1]: kubelet.service: Consumed 1.250s CPU time, 263.4M memory peak.
Oct 29 00:41:04.380104 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 29 00:41:04.381634 systemd[1]: Started sshd@0-10.0.0.76:22-10.0.0.1:41780.service - OpenSSH per-connection server daemon (10.0.0.1:41780).
Oct 29 00:41:04.474539 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 41780 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:41:04.476322 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:41:04.483137 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 29 00:41:04.484291 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 29 00:41:04.490180 systemd-logind[1612]: New session 1 of user core.
Oct 29 00:41:04.518189 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 29 00:41:04.521361 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 29 00:41:04.539780 (systemd)[1756]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:41:04.542188 systemd-logind[1612]: New session c1 of user core.
Oct 29 00:41:04.692847 systemd[1756]: Queued start job for default target default.target.
Oct 29 00:41:04.708629 systemd[1756]: Created slice app.slice - User Application Slice.
Oct 29 00:41:04.708657 systemd[1756]: Reached target paths.target - Paths.
Oct 29 00:41:04.708697 systemd[1756]: Reached target timers.target - Timers.
Oct 29 00:41:04.710329 systemd[1756]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 29 00:41:04.722311 systemd[1756]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 29 00:41:04.722462 systemd[1756]: Reached target sockets.target - Sockets.
Oct 29 00:41:04.722504 systemd[1756]: Reached target basic.target - Basic System.
Oct 29 00:41:04.722585 systemd[1756]: Reached target default.target - Main User Target.
Oct 29 00:41:04.722624 systemd[1756]: Startup finished in 173ms.
Oct 29 00:41:04.723042 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 29 00:41:04.724761 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 29 00:41:04.790489 systemd[1]: Started sshd@1-10.0.0.76:22-10.0.0.1:41786.service - OpenSSH per-connection server daemon (10.0.0.1:41786).
Oct 29 00:41:04.851318 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 41786 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:41:04.852807 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:41:04.857775 systemd-logind[1612]: New session 2 of user core.
Oct 29 00:41:04.871513 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 29 00:41:04.926193 sshd[1770]: Connection closed by 10.0.0.1 port 41786
Oct 29 00:41:04.926527 sshd-session[1767]: pam_unix(sshd:session): session closed for user core
Oct 29 00:41:04.940209 systemd[1]: sshd@1-10.0.0.76:22-10.0.0.1:41786.service: Deactivated successfully.
Oct 29 00:41:04.942175 systemd[1]: session-2.scope: Deactivated successfully.
Oct 29 00:41:04.942914 systemd-logind[1612]: Session 2 logged out. Waiting for processes to exit.
Oct 29 00:41:04.945865 systemd[1]: Started sshd@2-10.0.0.76:22-10.0.0.1:41788.service - OpenSSH per-connection server daemon (10.0.0.1:41788).
Oct 29 00:41:04.946440 systemd-logind[1612]: Removed session 2.
Oct 29 00:41:05.005187 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 41788 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:41:05.006827 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:41:05.011978 systemd-logind[1612]: New session 3 of user core.
Oct 29 00:41:05.025615 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 29 00:41:05.084300 sshd[1779]: Connection closed by 10.0.0.1 port 41788
Oct 29 00:41:05.084879 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Oct 29 00:41:05.103283 systemd[1]: sshd@2-10.0.0.76:22-10.0.0.1:41788.service: Deactivated successfully.
Oct 29 00:41:05.106416 systemd[1]: session-3.scope: Deactivated successfully.
Oct 29 00:41:05.109162 systemd-logind[1612]: Session 3 logged out. Waiting for processes to exit.
Oct 29 00:41:05.115285 systemd[1]: Started sshd@3-10.0.0.76:22-10.0.0.1:41794.service - OpenSSH per-connection server daemon (10.0.0.1:41794).
Oct 29 00:41:05.116647 systemd-logind[1612]: Removed session 3.
Oct 29 00:41:05.185675 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 41794 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:41:05.187844 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:41:05.196570 systemd-logind[1612]: New session 4 of user core.
Oct 29 00:41:05.207679 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 29 00:41:05.274118 sshd[1788]: Connection closed by 10.0.0.1 port 41794
Oct 29 00:41:05.274594 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
Oct 29 00:41:05.288162 systemd[1]: sshd@3-10.0.0.76:22-10.0.0.1:41794.service: Deactivated successfully.
Oct 29 00:41:05.289967 systemd[1]: session-4.scope: Deactivated successfully.
Oct 29 00:41:05.290687 systemd-logind[1612]: Session 4 logged out. Waiting for processes to exit.
Oct 29 00:41:05.293588 systemd[1]: Started sshd@4-10.0.0.76:22-10.0.0.1:41804.service - OpenSSH per-connection server daemon (10.0.0.1:41804).
Oct 29 00:41:05.294163 systemd-logind[1612]: Removed session 4.
Oct 29 00:41:05.360037 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 41804 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:41:05.361307 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:41:05.365691 systemd-logind[1612]: New session 5 of user core.
Oct 29 00:41:05.375523 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 29 00:41:05.438212 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 29 00:41:05.438617 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 29 00:41:05.457825 sudo[1798]: pam_unix(sudo:session): session closed for user root
Oct 29 00:41:05.459397 sshd[1797]: Connection closed by 10.0.0.1 port 41804
Oct 29 00:41:05.459757 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Oct 29 00:41:05.472127 systemd[1]: sshd@4-10.0.0.76:22-10.0.0.1:41804.service: Deactivated successfully.
Oct 29 00:41:05.474071 systemd[1]: session-5.scope: Deactivated successfully.
Oct 29 00:41:05.474808 systemd-logind[1612]: Session 5 logged out. Waiting for processes to exit.
Oct 29 00:41:05.477757 systemd[1]: Started sshd@5-10.0.0.76:22-10.0.0.1:41806.service - OpenSSH per-connection server daemon (10.0.0.1:41806).
Oct 29 00:41:05.478471 systemd-logind[1612]: Removed session 5.
Oct 29 00:41:05.532333 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 41806 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:41:05.533676 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:41:05.537831 systemd-logind[1612]: New session 6 of user core.
Oct 29 00:41:05.547516 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 29 00:41:05.601605 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 29 00:41:05.601923 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 29 00:41:05.608524 sudo[1809]: pam_unix(sudo:session): session closed for user root
Oct 29 00:41:05.615618 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 29 00:41:05.615931 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 29 00:41:05.626191 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 29 00:41:05.678630 augenrules[1831]: No rules
Oct 29 00:41:05.680431 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 29 00:41:05.680726 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 29 00:41:05.681860 sudo[1808]: pam_unix(sudo:session): session closed for user root
Oct 29 00:41:05.683561 sshd[1807]: Connection closed by 10.0.0.1 port 41806
Oct 29 00:41:05.683915 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
Oct 29 00:41:05.692700 systemd[1]: sshd@5-10.0.0.76:22-10.0.0.1:41806.service: Deactivated successfully.
Oct 29 00:41:05.694519 systemd[1]: session-6.scope: Deactivated successfully.
Oct 29 00:41:05.695245 systemd-logind[1612]: Session 6 logged out. Waiting for processes to exit.
Oct 29 00:41:05.698164 systemd[1]: Started sshd@6-10.0.0.76:22-10.0.0.1:41810.service - OpenSSH per-connection server daemon (10.0.0.1:41810).
Oct 29 00:41:05.698788 systemd-logind[1612]: Removed session 6.
Oct 29 00:41:05.751262 sshd[1840]: Accepted publickey for core from 10.0.0.1 port 41810 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:41:05.752498 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:41:05.756846 systemd-logind[1612]: New session 7 of user core.
Oct 29 00:41:05.778528 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 29 00:41:05.832614 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 29 00:41:05.832944 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 29 00:41:06.332038 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 29 00:41:06.363726 (dockerd)[1864]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 29 00:41:07.042415 dockerd[1864]: time="2025-10-29T00:41:07.042296871Z" level=info msg="Starting up"
Oct 29 00:41:07.043291 dockerd[1864]: time="2025-10-29T00:41:07.043266780Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 29 00:41:07.063268 dockerd[1864]: time="2025-10-29T00:41:07.063219198Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 29 00:41:07.847709 dockerd[1864]: time="2025-10-29T00:41:07.847620881Z" level=info msg="Loading containers: start."
Oct 29 00:41:07.862424 kernel: Initializing XFRM netlink socket
Oct 29 00:41:08.155047 systemd-networkd[1520]: docker0: Link UP
Oct 29 00:41:08.160498 dockerd[1864]: time="2025-10-29T00:41:08.160448715Z" level=info msg="Loading containers: done."
Oct 29 00:41:08.178693 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2326957091-merged.mount: Deactivated successfully.
Oct 29 00:41:08.181293 dockerd[1864]: time="2025-10-29T00:41:08.181242742Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 29 00:41:08.181374 dockerd[1864]: time="2025-10-29T00:41:08.181356174Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 29 00:41:08.181526 dockerd[1864]: time="2025-10-29T00:41:08.181499092Z" level=info msg="Initializing buildkit"
Oct 29 00:41:08.213795 dockerd[1864]: time="2025-10-29T00:41:08.213724897Z" level=info msg="Completed buildkit initialization"
Oct 29 00:41:08.218750 dockerd[1864]: time="2025-10-29T00:41:08.218701029Z" level=info msg="Daemon has completed initialization"
Oct 29 00:41:08.218942 dockerd[1864]: time="2025-10-29T00:41:08.218836113Z" level=info msg="API listen on /run/docker.sock"
Oct 29 00:41:08.219068 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 29 00:41:09.135696 containerd[1636]: time="2025-10-29T00:41:09.135629227Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Oct 29 00:41:09.655644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1286347238.mount: Deactivated successfully.
Oct 29 00:41:10.922060 containerd[1636]: time="2025-10-29T00:41:10.921960195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:10.922559 containerd[1636]: time="2025-10-29T00:41:10.922411431Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Oct 29 00:41:10.923639 containerd[1636]: time="2025-10-29T00:41:10.923603837Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:10.926810 containerd[1636]: time="2025-10-29T00:41:10.926734038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:10.927771 containerd[1636]: time="2025-10-29T00:41:10.927737510Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.792043611s"
Oct 29 00:41:10.927839 containerd[1636]: time="2025-10-29T00:41:10.927778466Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Oct 29 00:41:10.928680 containerd[1636]: time="2025-10-29T00:41:10.928648528Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Oct 29 00:41:12.354437 containerd[1636]: time="2025-10-29T00:41:12.354363422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:12.355305 containerd[1636]: time="2025-10-29T00:41:12.355282646Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Oct 29 00:41:12.356718 containerd[1636]: time="2025-10-29T00:41:12.356653567Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:12.359025 containerd[1636]: time="2025-10-29T00:41:12.358991001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:12.359922 containerd[1636]: time="2025-10-29T00:41:12.359870751Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.431190383s"
Oct 29 00:41:12.359922 containerd[1636]: time="2025-10-29T00:41:12.359915304Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Oct 29 00:41:12.360432 containerd[1636]: time="2025-10-29T00:41:12.360398420Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Oct 29 00:41:14.467579 containerd[1636]: time="2025-10-29T00:41:14.467501330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:14.468184 containerd[1636]: time="2025-10-29T00:41:14.468157260Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Oct 29 00:41:14.469352 containerd[1636]: time="2025-10-29T00:41:14.469314230Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:14.471707 containerd[1636]: time="2025-10-29T00:41:14.471685878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:41:14.472541 containerd[1636]: time="2025-10-29T00:41:14.472512107Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.112086656s"
Oct 29 00:41:14.472595 containerd[1636]: time="2025-10-29T00:41:14.472542675Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Oct 29 00:41:14.473154 containerd[1636]: time="2025-10-29T00:41:14.473134284Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Oct 29 00:41:14.510205 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 29 00:41:14.511952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 29 00:41:14.809084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 00:41:14.813087 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 00:41:14.986223 kubelet[2160]: E1029 00:41:14.986154 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:41:14.993496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:41:14.993766 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:41:14.994416 systemd[1]: kubelet.service: Consumed 313ms CPU time, 111.8M memory peak. Oct 29 00:41:15.998179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605920563.mount: Deactivated successfully. Oct 29 00:41:16.987800 containerd[1636]: time="2025-10-29T00:41:16.987707677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:16.988668 containerd[1636]: time="2025-10-29T00:41:16.988630507Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Oct 29 00:41:16.989939 containerd[1636]: time="2025-10-29T00:41:16.989882585Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:16.991896 containerd[1636]: time="2025-10-29T00:41:16.991848051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:16.992444 containerd[1636]: time="2025-10-29T00:41:16.992404455Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.519223253s" Oct 29 00:41:16.992495 containerd[1636]: time="2025-10-29T00:41:16.992444079Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 29 00:41:16.993048 containerd[1636]: time="2025-10-29T00:41:16.993014529Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 29 00:41:17.744054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506650388.mount: Deactivated successfully. Oct 29 00:41:18.597674 containerd[1636]: time="2025-10-29T00:41:18.597604107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:18.598602 containerd[1636]: time="2025-10-29T00:41:18.598292087Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Oct 29 00:41:18.599540 containerd[1636]: time="2025-10-29T00:41:18.599473593Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:18.601816 containerd[1636]: time="2025-10-29T00:41:18.601793544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:18.602868 containerd[1636]: time="2025-10-29T00:41:18.602829938Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.609782126s" Oct 29 00:41:18.602868 containerd[1636]: time="2025-10-29T00:41:18.602865374Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 29 00:41:18.603562 containerd[1636]: time="2025-10-29T00:41:18.603534379Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 29 00:41:19.226275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount629955344.mount: Deactivated successfully. Oct 29 00:41:19.231947 containerd[1636]: time="2025-10-29T00:41:19.231898255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:41:19.232880 containerd[1636]: time="2025-10-29T00:41:19.232633103Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 29 00:41:19.233865 containerd[1636]: time="2025-10-29T00:41:19.233838003Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:41:19.236252 containerd[1636]: time="2025-10-29T00:41:19.236188581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:41:19.236996 containerd[1636]: time="2025-10-29T00:41:19.236959597Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 633.394019ms" Oct 29 00:41:19.237062 containerd[1636]: time="2025-10-29T00:41:19.236997037Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 29 00:41:19.237695 containerd[1636]: time="2025-10-29T00:41:19.237661954Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 29 00:41:19.887038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274553846.mount: Deactivated successfully. Oct 29 00:41:22.449472 containerd[1636]: time="2025-10-29T00:41:22.449366617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:22.450661 containerd[1636]: time="2025-10-29T00:41:22.450629225Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Oct 29 00:41:22.452085 containerd[1636]: time="2025-10-29T00:41:22.452033408Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:22.454607 containerd[1636]: time="2025-10-29T00:41:22.454565857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:22.455646 containerd[1636]: time="2025-10-29T00:41:22.455619243Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.217926331s" Oct 29 00:41:22.455692 containerd[1636]: time="2025-10-29T00:41:22.455646694Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 29 00:41:25.010415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 29 00:41:25.012548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:41:25.274930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:41:25.293738 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 00:41:25.371127 kubelet[2318]: E1029 00:41:25.371035 2318 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:41:25.375535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:41:25.375782 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:41:25.376273 systemd[1]: kubelet.service: Consumed 282ms CPU time, 110.8M memory peak. Oct 29 00:41:25.461633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:41:25.461801 systemd[1]: kubelet.service: Consumed 282ms CPU time, 110.8M memory peak. Oct 29 00:41:25.464121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:41:25.488621 systemd[1]: Reload requested from client PID 2331 ('systemctl') (unit session-7.scope)... 
Oct 29 00:41:25.488650 systemd[1]: Reloading... Oct 29 00:41:25.628478 zram_generator::config[2379]: No configuration found. Oct 29 00:41:26.409713 systemd[1]: Reloading finished in 920 ms. Oct 29 00:41:26.494039 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 29 00:41:26.494162 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 29 00:41:26.494509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:41:26.494556 systemd[1]: kubelet.service: Consumed 176ms CPU time, 98.2M memory peak. Oct 29 00:41:26.496306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:41:26.709420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:41:26.719690 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 00:41:26.760053 kubelet[2424]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 00:41:26.760053 kubelet[2424]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 00:41:26.760053 kubelet[2424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 00:41:26.760531 kubelet[2424]: I1029 00:41:26.760116 2424 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 00:41:27.238959 kubelet[2424]: I1029 00:41:27.238910 2424 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 29 00:41:27.238959 kubelet[2424]: I1029 00:41:27.238939 2424 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 00:41:27.239237 kubelet[2424]: I1029 00:41:27.239210 2424 server.go:954] "Client rotation is on, will bootstrap in background" Oct 29 00:41:27.269493 kubelet[2424]: E1029 00:41:27.269438 2424 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:41:27.276790 kubelet[2424]: I1029 00:41:27.276735 2424 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 00:41:27.282724 kubelet[2424]: I1029 00:41:27.282695 2424 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 00:41:27.289020 kubelet[2424]: I1029 00:41:27.288990 2424 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 00:41:27.289784 kubelet[2424]: I1029 00:41:27.289728 2424 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 00:41:27.289960 kubelet[2424]: I1029 00:41:27.289765 2424 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 00:41:27.290369 kubelet[2424]: I1029 00:41:27.289963 2424 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 29 00:41:27.290369 kubelet[2424]: I1029 00:41:27.289973 2424 container_manager_linux.go:304] "Creating device plugin manager" Oct 29 00:41:27.290369 kubelet[2424]: I1029 00:41:27.290119 2424 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:41:27.292742 kubelet[2424]: I1029 00:41:27.292707 2424 kubelet.go:446] "Attempting to sync node with API server" Oct 29 00:41:27.292742 kubelet[2424]: I1029 00:41:27.292739 2424 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 00:41:27.292832 kubelet[2424]: I1029 00:41:27.292763 2424 kubelet.go:352] "Adding apiserver pod source" Oct 29 00:41:27.292832 kubelet[2424]: I1029 00:41:27.292776 2424 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 00:41:27.295506 kubelet[2424]: I1029 00:41:27.295479 2424 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 00:41:27.295958 kubelet[2424]: I1029 00:41:27.295813 2424 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 29 00:41:27.296240 kubelet[2424]: W1029 00:41:27.296175 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Oct 29 00:41:27.296326 kubelet[2424]: E1029 00:41:27.296237 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:41:27.296378 kubelet[2424]: W1029 00:41:27.296306 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Oct 29 00:41:27.296469 kubelet[2424]: E1029 00:41:27.296366 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:41:27.297077 kubelet[2424]: W1029 00:41:27.297051 2424 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 29 00:41:27.299538 kubelet[2424]: I1029 00:41:27.299497 2424 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 00:41:27.299538 kubelet[2424]: I1029 00:41:27.299535 2424 server.go:1287] "Started kubelet" Oct 29 00:41:27.299711 kubelet[2424]: I1029 00:41:27.299676 2424 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 00:41:27.300902 kubelet[2424]: I1029 00:41:27.300863 2424 server.go:479] "Adding debug handlers to kubelet server" Oct 29 00:41:27.305186 kubelet[2424]: I1029 00:41:27.305114 2424 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 00:41:27.307411 kubelet[2424]: I1029 00:41:27.305579 2424 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 00:41:27.307411 kubelet[2424]: I1029 00:41:27.306110 2424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 00:41:27.307411 kubelet[2424]: I1029 00:41:27.306449 2424 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 00:41:27.307411 kubelet[2424]: E1029 00:41:27.304973 2424 event.go:368] "Unable to write 
event (may retry after sleeping)" err="Post \"https://10.0.0.76:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.76:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872cf70b8d49b0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 00:41:27.299513103 +0000 UTC m=+0.575986522,LastTimestamp:2025-10-29 00:41:27.299513103 +0000 UTC m=+0.575986522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 00:41:27.307411 kubelet[2424]: E1029 00:41:27.307156 2424 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 00:41:27.307411 kubelet[2424]: E1029 00:41:27.307294 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:27.307411 kubelet[2424]: I1029 00:41:27.307330 2424 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 00:41:27.307642 kubelet[2424]: I1029 00:41:27.307543 2424 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 00:41:27.307642 kubelet[2424]: I1029 00:41:27.307598 2424 reconciler.go:26] "Reconciler: start to sync state" Oct 29 00:41:27.307983 kubelet[2424]: W1029 00:41:27.307936 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Oct 29 00:41:27.308059 kubelet[2424]: E1029 00:41:27.307999 2424 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:41:27.308103 kubelet[2424]: I1029 00:41:27.308077 2424 factory.go:221] Registration of the systemd container factory successfully Oct 29 00:41:27.308197 kubelet[2424]: I1029 00:41:27.308173 2424 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 00:41:27.308345 kubelet[2424]: E1029 00:41:27.308313 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="200ms" Oct 29 00:41:27.309236 kubelet[2424]: I1029 00:41:27.309199 2424 factory.go:221] Registration of the containerd container factory successfully Oct 29 00:41:27.327856 kubelet[2424]: I1029 00:41:27.327811 2424 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 00:41:27.327856 kubelet[2424]: I1029 00:41:27.327841 2424 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 00:41:27.328037 kubelet[2424]: I1029 00:41:27.327883 2424 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:41:27.329839 kubelet[2424]: I1029 00:41:27.329791 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 29 00:41:27.331461 kubelet[2424]: I1029 00:41:27.331211 2424 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 29 00:41:27.331461 kubelet[2424]: I1029 00:41:27.331240 2424 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 29 00:41:27.331461 kubelet[2424]: I1029 00:41:27.331271 2424 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 00:41:27.331461 kubelet[2424]: I1029 00:41:27.331282 2424 kubelet.go:2382] "Starting kubelet main sync loop" Oct 29 00:41:27.331461 kubelet[2424]: E1029 00:41:27.331338 2424 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 00:41:27.363059 kubelet[2424]: W1029 00:41:27.362996 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Oct 29 00:41:27.363059 kubelet[2424]: E1029 00:41:27.363057 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:41:27.408470 kubelet[2424]: E1029 00:41:27.408438 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:27.431653 kubelet[2424]: E1029 00:41:27.431618 2424 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 00:41:27.509139 kubelet[2424]: E1029 00:41:27.509000 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:27.509508 kubelet[2424]: E1029 00:41:27.509472 2424 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="400ms" Oct 29 00:41:27.610110 kubelet[2424]: E1029 00:41:27.610008 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:27.632377 kubelet[2424]: E1029 00:41:27.632283 2424 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 00:41:27.710808 kubelet[2424]: E1029 00:41:27.710737 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:27.811399 kubelet[2424]: E1029 00:41:27.811328 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:27.910334 kubelet[2424]: E1029 00:41:27.910273 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="800ms" Oct 29 00:41:27.912311 kubelet[2424]: E1029 00:41:27.912254 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:28.012826 kubelet[2424]: E1029 00:41:28.012763 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:28.033032 kubelet[2424]: E1029 00:41:28.032950 2424 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 00:41:28.113693 kubelet[2424]: E1029 00:41:28.113532 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:28.214292 kubelet[2424]: E1029 00:41:28.214222 
2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:28.261902 kubelet[2424]: W1029 00:41:28.261845 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Oct 29 00:41:28.261902 kubelet[2424]: E1029 00:41:28.261902 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:41:28.315178 kubelet[2424]: E1029 00:41:28.315110 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:28.358893 kubelet[2424]: W1029 00:41:28.358813 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Oct 29 00:41:28.358893 kubelet[2424]: E1029 00:41:28.358876 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:41:28.415651 kubelet[2424]: E1029 00:41:28.415419 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:28.476041 kubelet[2424]: I1029 00:41:28.475965 2424 policy_none.go:49] "None policy: Start" Oct 29 00:41:28.476041 
kubelet[2424]: I1029 00:41:28.476021 2424 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 00:41:28.476041 kubelet[2424]: I1029 00:41:28.476049 2424 state_mem.go:35] "Initializing new in-memory state store" Oct 29 00:41:28.484915 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 29 00:41:28.496098 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 29 00:41:28.499394 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 29 00:41:28.509522 kubelet[2424]: I1029 00:41:28.509484 2424 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 29 00:41:28.509821 kubelet[2424]: I1029 00:41:28.509779 2424 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 00:41:28.510019 kubelet[2424]: I1029 00:41:28.509804 2424 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 00:41:28.510148 kubelet[2424]: I1029 00:41:28.510118 2424 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 00:41:28.511758 kubelet[2424]: E1029 00:41:28.511728 2424 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 00:41:28.511811 kubelet[2424]: E1029 00:41:28.511780 2424 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 29 00:41:28.589435 kubelet[2424]: W1029 00:41:28.589324 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Oct 29 00:41:28.589435 kubelet[2424]: E1029 00:41:28.589437 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:41:28.611736 kubelet[2424]: I1029 00:41:28.611678 2424 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:41:28.612228 kubelet[2424]: E1029 00:41:28.612166 2424 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Oct 29 00:41:28.711377 kubelet[2424]: E1029 00:41:28.711204 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="1.6s" Oct 29 00:41:28.731147 kubelet[2424]: W1029 00:41:28.731070 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused Oct 29 00:41:28.731147 kubelet[2424]: 
E1029 00:41:28.731145 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:41:28.813883 kubelet[2424]: I1029 00:41:28.813812 2424 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:41:28.814363 kubelet[2424]: E1029 00:41:28.814317 2424 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Oct 29 00:41:28.841664 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 29 00:41:28.863813 kubelet[2424]: E1029 00:41:28.863757 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:41:28.867470 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Oct 29 00:41:28.878135 kubelet[2424]: E1029 00:41:28.878082 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:41:28.881149 systemd[1]: Created slice kubepods-burstable-pod583bf533257d80a5603e87f8dcb9b0ae.slice - libcontainer container kubepods-burstable-pod583bf533257d80a5603e87f8dcb9b0ae.slice. 
Oct 29 00:41:28.882992 kubelet[2424]: E1029 00:41:28.882961 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:41:28.919418 kubelet[2424]: I1029 00:41:28.919359 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/583bf533257d80a5603e87f8dcb9b0ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"583bf533257d80a5603e87f8dcb9b0ae\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:28.919418 kubelet[2424]: I1029 00:41:28.919414 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/583bf533257d80a5603e87f8dcb9b0ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"583bf533257d80a5603e87f8dcb9b0ae\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:28.919575 kubelet[2424]: I1029 00:41:28.919433 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:28.919575 kubelet[2424]: I1029 00:41:28.919448 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:28.919575 kubelet[2424]: I1029 00:41:28.919467 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:28.919575 kubelet[2424]: I1029 00:41:28.919483 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:28.919575 kubelet[2424]: I1029 00:41:28.919497 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/583bf533257d80a5603e87f8dcb9b0ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"583bf533257d80a5603e87f8dcb9b0ae\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:28.919727 kubelet[2424]: I1029 00:41:28.919578 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:28.919727 kubelet[2424]: I1029 00:41:28.919632 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:29.165588 kubelet[2424]: E1029 00:41:29.165524 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:29.166411 containerd[1636]: time="2025-10-29T00:41:29.166350639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 29 00:41:29.181134 kubelet[2424]: E1029 00:41:29.180689 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:29.182822 containerd[1636]: time="2025-10-29T00:41:29.182745666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 29 00:41:29.184208 kubelet[2424]: E1029 00:41:29.184164 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:29.184727 containerd[1636]: time="2025-10-29T00:41:29.184666218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:583bf533257d80a5603e87f8dcb9b0ae,Namespace:kube-system,Attempt:0,}" Oct 29 00:41:29.210418 containerd[1636]: time="2025-10-29T00:41:29.210017340Z" level=info msg="connecting to shim be32728ef5d1a116a38989de4020ca7dd872145a283e8cece3040d0a9db1a553" address="unix:///run/containerd/s/96c7decb79cdca6f6aea3f96fafb379617d703f9bc4a492f1a0b821db9b58b18" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:29.215323 containerd[1636]: time="2025-10-29T00:41:29.215266244Z" level=info msg="connecting to shim f252cfe17f13d37a1380b11029737c3bed4d9a4fcfc7e1d99223f07dd4937997" address="unix:///run/containerd/s/d3d453ea9c1961519e26bdbc230e2588e0d5d09022aa797b4e248bb352b92b48" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:29.216259 kubelet[2424]: I1029 00:41:29.216210 2424 kubelet_node_status.go:75] "Attempting to 
register node" node="localhost" Oct 29 00:41:29.216606 kubelet[2424]: E1029 00:41:29.216573 2424 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Oct 29 00:41:29.230170 containerd[1636]: time="2025-10-29T00:41:29.230108669Z" level=info msg="connecting to shim 16efb46d4455fb22a44758fb4535917414fbb3517df62515dcdc25ba0aa0c8af" address="unix:///run/containerd/s/8c964fc35e4389dcaeea4e4bf329f38177f3eac6e006d4ff14cad09d64c51e60" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:29.412596 systemd[1]: Started cri-containerd-be32728ef5d1a116a38989de4020ca7dd872145a283e8cece3040d0a9db1a553.scope - libcontainer container be32728ef5d1a116a38989de4020ca7dd872145a283e8cece3040d0a9db1a553. Oct 29 00:41:29.414576 systemd[1]: Started cri-containerd-f252cfe17f13d37a1380b11029737c3bed4d9a4fcfc7e1d99223f07dd4937997.scope - libcontainer container f252cfe17f13d37a1380b11029737c3bed4d9a4fcfc7e1d99223f07dd4937997. Oct 29 00:41:29.419305 systemd[1]: Started cri-containerd-16efb46d4455fb22a44758fb4535917414fbb3517df62515dcdc25ba0aa0c8af.scope - libcontainer container 16efb46d4455fb22a44758fb4535917414fbb3517df62515dcdc25ba0aa0c8af. 
Oct 29 00:41:29.464650 kubelet[2424]: E1029 00:41:29.464564 2424 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:41:29.482754 containerd[1636]: time="2025-10-29T00:41:29.482625979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f252cfe17f13d37a1380b11029737c3bed4d9a4fcfc7e1d99223f07dd4937997\"" Oct 29 00:41:29.486594 kubelet[2424]: E1029 00:41:29.485794 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:29.487971 containerd[1636]: time="2025-10-29T00:41:29.487934454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:583bf533257d80a5603e87f8dcb9b0ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"16efb46d4455fb22a44758fb4535917414fbb3517df62515dcdc25ba0aa0c8af\"" Oct 29 00:41:29.488328 containerd[1636]: time="2025-10-29T00:41:29.488299198Z" level=info msg="CreateContainer within sandbox \"f252cfe17f13d37a1380b11029737c3bed4d9a4fcfc7e1d99223f07dd4937997\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 00:41:29.489972 kubelet[2424]: E1029 00:41:29.489934 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:29.491682 containerd[1636]: time="2025-10-29T00:41:29.491639843Z" level=info msg="CreateContainer within sandbox 
\"16efb46d4455fb22a44758fb4535917414fbb3517df62515dcdc25ba0aa0c8af\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 00:41:29.491992 containerd[1636]: time="2025-10-29T00:41:29.491959362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"be32728ef5d1a116a38989de4020ca7dd872145a283e8cece3040d0a9db1a553\"" Oct 29 00:41:29.493030 kubelet[2424]: E1029 00:41:29.493006 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:29.495449 containerd[1636]: time="2025-10-29T00:41:29.495415113Z" level=info msg="CreateContainer within sandbox \"be32728ef5d1a116a38989de4020ca7dd872145a283e8cece3040d0a9db1a553\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 00:41:29.504676 containerd[1636]: time="2025-10-29T00:41:29.504632248Z" level=info msg="Container 6beea4b370356a9b04e26471d382fe9b19d5accd95687f438042c5e744f6240c: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:29.506741 containerd[1636]: time="2025-10-29T00:41:29.506621799Z" level=info msg="Container 997832ab7c88a26b8a017f5922210ce0179b57bc7145417a2d4fa8e533fac555: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:29.514131 containerd[1636]: time="2025-10-29T00:41:29.514046664Z" level=info msg="CreateContainer within sandbox \"f252cfe17f13d37a1380b11029737c3bed4d9a4fcfc7e1d99223f07dd4937997\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6beea4b370356a9b04e26471d382fe9b19d5accd95687f438042c5e744f6240c\"" Oct 29 00:41:29.515587 containerd[1636]: time="2025-10-29T00:41:29.514998509Z" level=info msg="StartContainer for \"6beea4b370356a9b04e26471d382fe9b19d5accd95687f438042c5e744f6240c\"" Oct 29 00:41:29.515587 containerd[1636]: 
time="2025-10-29T00:41:29.515048823Z" level=info msg="Container 841a4c8ea6e2116fcfd1b0c866f9a05a45e4d88c72f42bbcea0b70af21e6ce73: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:29.517176 containerd[1636]: time="2025-10-29T00:41:29.517129655Z" level=info msg="connecting to shim 6beea4b370356a9b04e26471d382fe9b19d5accd95687f438042c5e744f6240c" address="unix:///run/containerd/s/d3d453ea9c1961519e26bdbc230e2588e0d5d09022aa797b4e248bb352b92b48" protocol=ttrpc version=3 Oct 29 00:41:29.521976 containerd[1636]: time="2025-10-29T00:41:29.521933514Z" level=info msg="CreateContainer within sandbox \"16efb46d4455fb22a44758fb4535917414fbb3517df62515dcdc25ba0aa0c8af\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"997832ab7c88a26b8a017f5922210ce0179b57bc7145417a2d4fa8e533fac555\"" Oct 29 00:41:29.523410 containerd[1636]: time="2025-10-29T00:41:29.522551764Z" level=info msg="StartContainer for \"997832ab7c88a26b8a017f5922210ce0179b57bc7145417a2d4fa8e533fac555\"" Oct 29 00:41:29.524124 containerd[1636]: time="2025-10-29T00:41:29.524092293Z" level=info msg="connecting to shim 997832ab7c88a26b8a017f5922210ce0179b57bc7145417a2d4fa8e533fac555" address="unix:///run/containerd/s/8c964fc35e4389dcaeea4e4bf329f38177f3eac6e006d4ff14cad09d64c51e60" protocol=ttrpc version=3 Oct 29 00:41:29.525102 containerd[1636]: time="2025-10-29T00:41:29.525051001Z" level=info msg="CreateContainer within sandbox \"be32728ef5d1a116a38989de4020ca7dd872145a283e8cece3040d0a9db1a553\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"841a4c8ea6e2116fcfd1b0c866f9a05a45e4d88c72f42bbcea0b70af21e6ce73\"" Oct 29 00:41:29.525802 containerd[1636]: time="2025-10-29T00:41:29.525780869Z" level=info msg="StartContainer for \"841a4c8ea6e2116fcfd1b0c866f9a05a45e4d88c72f42bbcea0b70af21e6ce73\"" Oct 29 00:41:29.526903 containerd[1636]: time="2025-10-29T00:41:29.526851648Z" level=info msg="connecting to shim 
841a4c8ea6e2116fcfd1b0c866f9a05a45e4d88c72f42bbcea0b70af21e6ce73" address="unix:///run/containerd/s/96c7decb79cdca6f6aea3f96fafb379617d703f9bc4a492f1a0b821db9b58b18" protocol=ttrpc version=3 Oct 29 00:41:29.552568 systemd[1]: Started cri-containerd-6beea4b370356a9b04e26471d382fe9b19d5accd95687f438042c5e744f6240c.scope - libcontainer container 6beea4b370356a9b04e26471d382fe9b19d5accd95687f438042c5e744f6240c. Oct 29 00:41:29.553988 systemd[1]: Started cri-containerd-997832ab7c88a26b8a017f5922210ce0179b57bc7145417a2d4fa8e533fac555.scope - libcontainer container 997832ab7c88a26b8a017f5922210ce0179b57bc7145417a2d4fa8e533fac555. Oct 29 00:41:29.557965 systemd[1]: Started cri-containerd-841a4c8ea6e2116fcfd1b0c866f9a05a45e4d88c72f42bbcea0b70af21e6ce73.scope - libcontainer container 841a4c8ea6e2116fcfd1b0c866f9a05a45e4d88c72f42bbcea0b70af21e6ce73. Oct 29 00:41:29.634180 containerd[1636]: time="2025-10-29T00:41:29.632263963Z" level=info msg="StartContainer for \"841a4c8ea6e2116fcfd1b0c866f9a05a45e4d88c72f42bbcea0b70af21e6ce73\" returns successfully" Oct 29 00:41:29.636435 containerd[1636]: time="2025-10-29T00:41:29.636121968Z" level=info msg="StartContainer for \"6beea4b370356a9b04e26471d382fe9b19d5accd95687f438042c5e744f6240c\" returns successfully" Oct 29 00:41:29.693853 containerd[1636]: time="2025-10-29T00:41:29.693684609Z" level=info msg="StartContainer for \"997832ab7c88a26b8a017f5922210ce0179b57bc7145417a2d4fa8e533fac555\" returns successfully" Oct 29 00:41:30.018719 kubelet[2424]: I1029 00:41:30.018281 2424 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:41:30.412266 kubelet[2424]: E1029 00:41:30.412071 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:41:30.412556 kubelet[2424]: E1029 00:41:30.412447 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:30.412866 kubelet[2424]: E1029 00:41:30.412602 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:41:30.412866 kubelet[2424]: E1029 00:41:30.412708 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:30.415690 kubelet[2424]: E1029 00:41:30.415665 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:41:30.415782 kubelet[2424]: E1029 00:41:30.415765 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:31.418402 kubelet[2424]: E1029 00:41:31.418355 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:41:31.419015 kubelet[2424]: E1029 00:41:31.418522 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:31.419015 kubelet[2424]: E1029 00:41:31.418754 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:41:31.419015 kubelet[2424]: E1029 00:41:31.418836 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:31.474099 kubelet[2424]: E1029 00:41:31.473966 2424 nodelease.go:49] "Failed to get node when trying to set owner ref to 
the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 29 00:41:31.578718 kubelet[2424]: I1029 00:41:31.578637 2424 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 00:41:31.578718 kubelet[2424]: E1029 00:41:31.578715 2424 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 29 00:41:31.609098 kubelet[2424]: I1029 00:41:31.609042 2424 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:31.664413 kubelet[2424]: E1029 00:41:31.663100 2424 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:31.664413 kubelet[2424]: I1029 00:41:31.663142 2424 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:31.665146 kubelet[2424]: E1029 00:41:31.665098 2424 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:31.665146 kubelet[2424]: I1029 00:41:31.665140 2424 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:31.666888 kubelet[2424]: E1029 00:41:31.666862 2424 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:32.398460 kubelet[2424]: I1029 00:41:32.398404 2424 apiserver.go:52] "Watching apiserver" Oct 29 00:41:32.408459 kubelet[2424]: I1029 00:41:32.408420 2424 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" 
Oct 29 00:41:32.417719 kubelet[2424]: I1029 00:41:32.417692 2424 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:32.419289 kubelet[2424]: E1029 00:41:32.419268 2424 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:32.419797 kubelet[2424]: E1029 00:41:32.419449 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:34.158732 systemd[1]: Reload requested from client PID 2702 ('systemctl') (unit session-7.scope)... Oct 29 00:41:34.158750 systemd[1]: Reloading... Oct 29 00:41:34.240429 zram_generator::config[2746]: No configuration found. Oct 29 00:41:34.498557 systemd[1]: Reloading finished in 339 ms. Oct 29 00:41:34.532123 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:41:34.549874 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 00:41:34.550331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:41:34.550431 systemd[1]: kubelet.service: Consumed 1.108s CPU time, 131.2M memory peak. Oct 29 00:41:34.553098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:41:34.806938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:41:34.822737 (kubelet)[2791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 00:41:35.277265 kubelet[2791]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 00:41:35.277265 kubelet[2791]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 00:41:35.277265 kubelet[2791]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 00:41:35.277691 kubelet[2791]: I1029 00:41:35.277314 2791 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 00:41:35.283393 kubelet[2791]: I1029 00:41:35.283343 2791 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 29 00:41:35.283393 kubelet[2791]: I1029 00:41:35.283369 2791 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 00:41:35.283590 kubelet[2791]: I1029 00:41:35.283571 2791 server.go:954] "Client rotation is on, will bootstrap in background" Oct 29 00:41:35.284632 kubelet[2791]: I1029 00:41:35.284617 2791 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 29 00:41:35.287451 kubelet[2791]: I1029 00:41:35.287415 2791 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 00:41:35.291700 kubelet[2791]: I1029 00:41:35.291673 2791 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 00:41:35.296542 kubelet[2791]: I1029 00:41:35.296525 2791 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 00:41:35.296759 kubelet[2791]: I1029 00:41:35.296726 2791 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 00:41:35.296910 kubelet[2791]: I1029 00:41:35.296753 2791 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 00:41:35.297020 kubelet[2791]: I1029 00:41:35.296915 2791 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 29 00:41:35.297020 kubelet[2791]: I1029 00:41:35.296923 2791 container_manager_linux.go:304] "Creating device plugin manager" Oct 29 00:41:35.297020 kubelet[2791]: I1029 00:41:35.296968 2791 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:41:35.297137 kubelet[2791]: I1029 00:41:35.297124 2791 kubelet.go:446] "Attempting to sync node with API server" Oct 29 00:41:35.298572 kubelet[2791]: I1029 00:41:35.298546 2791 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 00:41:35.298629 kubelet[2791]: I1029 00:41:35.298603 2791 kubelet.go:352] "Adding apiserver pod source" Oct 29 00:41:35.298629 kubelet[2791]: I1029 00:41:35.298616 2791 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 00:41:35.300507 kubelet[2791]: I1029 00:41:35.300490 2791 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 00:41:35.301230 kubelet[2791]: I1029 00:41:35.301150 2791 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 29 00:41:35.302042 kubelet[2791]: I1029 00:41:35.302028 2791 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 00:41:35.302173 kubelet[2791]: I1029 00:41:35.302163 2791 server.go:1287] "Started kubelet" Oct 29 00:41:35.303196 kubelet[2791]: I1029 00:41:35.303167 2791 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 00:41:35.303747 kubelet[2791]: I1029 00:41:35.303592 2791 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 00:41:35.304056 kubelet[2791]: I1029 00:41:35.304003 2791 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 00:41:35.304471 kubelet[2791]: I1029 00:41:35.304438 2791 server.go:479] "Adding debug handlers to kubelet server" Oct 29 00:41:35.306200 kubelet[2791]: I1029 00:41:35.306181 2791 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 00:41:35.307777 kubelet[2791]: I1029 00:41:35.307756 2791 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 00:41:35.308398 kubelet[2791]: I1029 00:41:35.307941 2791 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 00:41:35.309882 kubelet[2791]: E1029 00:41:35.309860 2791 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:41:35.310613 kubelet[2791]: I1029 00:41:35.310587 2791 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 00:41:35.310779 kubelet[2791]: I1029 00:41:35.310762 2791 reconciler.go:26] "Reconciler: start to sync state" Oct 29 00:41:35.314430 kubelet[2791]: I1029 00:41:35.314378 2791 factory.go:221] Registration of the systemd container factory successfully Oct 29 00:41:35.314550 kubelet[2791]: I1029 00:41:35.314512 2791 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 00:41:35.315747 kubelet[2791]: E1029 00:41:35.315724 2791 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 00:41:35.317557 kubelet[2791]: I1029 00:41:35.316794 2791 factory.go:221] Registration of the containerd container factory successfully Oct 29 00:41:35.325477 kubelet[2791]: I1029 00:41:35.325419 2791 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 29 00:41:35.327020 kubelet[2791]: I1029 00:41:35.326802 2791 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 29 00:41:35.327020 kubelet[2791]: I1029 00:41:35.326823 2791 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 29 00:41:35.327020 kubelet[2791]: I1029 00:41:35.326840 2791 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 00:41:35.327020 kubelet[2791]: I1029 00:41:35.326850 2791 kubelet.go:2382] "Starting kubelet main sync loop" Oct 29 00:41:35.327020 kubelet[2791]: E1029 00:41:35.326910 2791 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 00:41:35.347899 kubelet[2791]: I1029 00:41:35.347859 2791 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 00:41:35.349455 kubelet[2791]: I1029 00:41:35.349434 2791 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 00:41:35.349565 kubelet[2791]: I1029 00:41:35.349555 2791 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:41:35.349807 kubelet[2791]: I1029 00:41:35.349790 2791 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 00:41:35.349879 kubelet[2791]: I1029 00:41:35.349853 2791 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 00:41:35.349931 kubelet[2791]: I1029 00:41:35.349923 2791 policy_none.go:49] "None policy: Start" Oct 29 00:41:35.349999 kubelet[2791]: I1029 00:41:35.349989 2791 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 00:41:35.350054 kubelet[2791]: I1029 00:41:35.350045 2791 state_mem.go:35] "Initializing new in-memory state store" Oct 29 00:41:35.350249 kubelet[2791]: I1029 00:41:35.350236 2791 state_mem.go:75] "Updated machine memory state" Oct 29 00:41:35.355638 kubelet[2791]: I1029 00:41:35.355587 2791 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 29 00:41:35.355894 kubelet[2791]: I1029 
00:41:35.355871 2791 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 00:41:35.355933 kubelet[2791]: I1029 00:41:35.355895 2791 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 00:41:35.356412 kubelet[2791]: I1029 00:41:35.356369 2791 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 00:41:35.357289 kubelet[2791]: E1029 00:41:35.357233 2791 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 00:41:35.428570 kubelet[2791]: I1029 00:41:35.428516 2791 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:35.429461 kubelet[2791]: I1029 00:41:35.429440 2791 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:35.430358 kubelet[2791]: I1029 00:41:35.430337 2791 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:35.469272 kubelet[2791]: I1029 00:41:35.468885 2791 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:41:35.478985 kubelet[2791]: I1029 00:41:35.478795 2791 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 29 00:41:35.478985 kubelet[2791]: I1029 00:41:35.478907 2791 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 00:41:35.512310 kubelet[2791]: I1029 00:41:35.511753 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/583bf533257d80a5603e87f8dcb9b0ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"583bf533257d80a5603e87f8dcb9b0ae\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:35.512310 kubelet[2791]: I1029 00:41:35.511816 2791 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:35.512310 kubelet[2791]: I1029 00:41:35.511836 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:35.512310 kubelet[2791]: I1029 00:41:35.511855 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:35.512310 kubelet[2791]: I1029 00:41:35.511873 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 29 00:41:35.512582 kubelet[2791]: I1029 00:41:35.511933 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/583bf533257d80a5603e87f8dcb9b0ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"583bf533257d80a5603e87f8dcb9b0ae\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:35.512582 kubelet[2791]: I1029 00:41:35.511991 2791 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/583bf533257d80a5603e87f8dcb9b0ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"583bf533257d80a5603e87f8dcb9b0ae\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:41:35.512582 kubelet[2791]: I1029 00:41:35.512019 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:35.512582 kubelet[2791]: I1029 00:41:35.512033 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:41:35.734359 kubelet[2791]: E1029 00:41:35.733932 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:35.737480 kubelet[2791]: E1029 00:41:35.737439 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:35.737717 kubelet[2791]: E1029 00:41:35.737687 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:36.299411 kubelet[2791]: I1029 00:41:36.299352 2791 apiserver.go:52] "Watching apiserver" Oct 29 00:41:36.310731 kubelet[2791]: I1029 
00:41:36.310686 2791 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 00:41:36.338781 kubelet[2791]: E1029 00:41:36.338727 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:36.338946 kubelet[2791]: E1029 00:41:36.338820 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:36.338946 kubelet[2791]: E1029 00:41:36.338745 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:36.341085 kubelet[2791]: I1029 00:41:36.341007 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.340987981 podStartE2EDuration="1.340987981s" podCreationTimestamp="2025-10-29 00:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:41:36.333549969 +0000 UTC m=+1.506733180" watchObservedRunningTime="2025-10-29 00:41:36.340987981 +0000 UTC m=+1.514171192" Oct 29 00:41:36.341199 kubelet[2791]: I1029 00:41:36.341162 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.341154781 podStartE2EDuration="1.341154781s" podCreationTimestamp="2025-10-29 00:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:41:36.340809197 +0000 UTC m=+1.513992408" watchObservedRunningTime="2025-10-29 00:41:36.341154781 +0000 UTC m=+1.514337992" Oct 29 00:41:36.359313 kubelet[2791]: I1029 
00:41:36.359234 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.359214707 podStartE2EDuration="1.359214707s" podCreationTimestamp="2025-10-29 00:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:41:36.349913262 +0000 UTC m=+1.523096493" watchObservedRunningTime="2025-10-29 00:41:36.359214707 +0000 UTC m=+1.532397938" Oct 29 00:41:37.339707 kubelet[2791]: E1029 00:41:37.339668 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:37.340326 kubelet[2791]: E1029 00:41:37.339784 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:38.341421 kubelet[2791]: E1029 00:41:38.341356 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:38.346550 kubelet[2791]: E1029 00:41:38.346510 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:40.200977 kubelet[2791]: I1029 00:41:40.200935 2791 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 00:41:40.201468 containerd[1636]: time="2025-10-29T00:41:40.201260057Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 29 00:41:40.201733 kubelet[2791]: I1029 00:41:40.201542 2791 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 00:41:40.622674 systemd[1]: Created slice kubepods-besteffort-podd41bb40e_0461_482c_8fec_b7b5eeea3213.slice - libcontainer container kubepods-besteffort-podd41bb40e_0461_482c_8fec_b7b5eeea3213.slice. Oct 29 00:41:40.641234 kubelet[2791]: I1029 00:41:40.641147 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d41bb40e-0461-482c-8fec-b7b5eeea3213-xtables-lock\") pod \"kube-proxy-x2ctj\" (UID: \"d41bb40e-0461-482c-8fec-b7b5eeea3213\") " pod="kube-system/kube-proxy-x2ctj" Oct 29 00:41:40.641234 kubelet[2791]: I1029 00:41:40.641221 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d41bb40e-0461-482c-8fec-b7b5eeea3213-kube-proxy\") pod \"kube-proxy-x2ctj\" (UID: \"d41bb40e-0461-482c-8fec-b7b5eeea3213\") " pod="kube-system/kube-proxy-x2ctj" Oct 29 00:41:40.641452 kubelet[2791]: I1029 00:41:40.641252 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d41bb40e-0461-482c-8fec-b7b5eeea3213-lib-modules\") pod \"kube-proxy-x2ctj\" (UID: \"d41bb40e-0461-482c-8fec-b7b5eeea3213\") " pod="kube-system/kube-proxy-x2ctj" Oct 29 00:41:40.641452 kubelet[2791]: I1029 00:41:40.641278 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzg92\" (UniqueName: \"kubernetes.io/projected/d41bb40e-0461-482c-8fec-b7b5eeea3213-kube-api-access-xzg92\") pod \"kube-proxy-x2ctj\" (UID: \"d41bb40e-0461-482c-8fec-b7b5eeea3213\") " pod="kube-system/kube-proxy-x2ctj" Oct 29 00:41:40.935516 kubelet[2791]: E1029 00:41:40.934512 2791 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:40.937111 containerd[1636]: time="2025-10-29T00:41:40.937057385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x2ctj,Uid:d41bb40e-0461-482c-8fec-b7b5eeea3213,Namespace:kube-system,Attempt:0,}" Oct 29 00:41:41.030701 systemd[1]: Created slice kubepods-besteffort-podd7159d88_ce16_4504_9be5_78d3402c4719.slice - libcontainer container kubepods-besteffort-podd7159d88_ce16_4504_9be5_78d3402c4719.slice. Oct 29 00:41:41.034193 containerd[1636]: time="2025-10-29T00:41:41.034040539Z" level=info msg="connecting to shim 2815d8a0e8e0e421e93b48aaaec6d7b8b93b43110f766518118732f5d646d848" address="unix:///run/containerd/s/7f1f043920d015a1fbe13c8edf9541d9c82c529c105fc7156328b390d5960e69" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:41.043824 kubelet[2791]: I1029 00:41:41.043792 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d7159d88-ce16-4504-9be5-78d3402c4719-var-lib-calico\") pod \"tigera-operator-7dcd859c48-bdtmb\" (UID: \"d7159d88-ce16-4504-9be5-78d3402c4719\") " pod="tigera-operator/tigera-operator-7dcd859c48-bdtmb" Oct 29 00:41:41.043949 kubelet[2791]: I1029 00:41:41.043828 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52d58\" (UniqueName: \"kubernetes.io/projected/d7159d88-ce16-4504-9be5-78d3402c4719-kube-api-access-52d58\") pod \"tigera-operator-7dcd859c48-bdtmb\" (UID: \"d7159d88-ce16-4504-9be5-78d3402c4719\") " pod="tigera-operator/tigera-operator-7dcd859c48-bdtmb" Oct 29 00:41:41.065529 systemd[1]: Started cri-containerd-2815d8a0e8e0e421e93b48aaaec6d7b8b93b43110f766518118732f5d646d848.scope - libcontainer container 2815d8a0e8e0e421e93b48aaaec6d7b8b93b43110f766518118732f5d646d848. 
Oct 29 00:41:41.089615 containerd[1636]: time="2025-10-29T00:41:41.089565605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x2ctj,Uid:d41bb40e-0461-482c-8fec-b7b5eeea3213,Namespace:kube-system,Attempt:0,} returns sandbox id \"2815d8a0e8e0e421e93b48aaaec6d7b8b93b43110f766518118732f5d646d848\"" Oct 29 00:41:41.090294 kubelet[2791]: E1029 00:41:41.090259 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:41.092622 containerd[1636]: time="2025-10-29T00:41:41.092583739Z" level=info msg="CreateContainer within sandbox \"2815d8a0e8e0e421e93b48aaaec6d7b8b93b43110f766518118732f5d646d848\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 00:41:41.103306 containerd[1636]: time="2025-10-29T00:41:41.103261911Z" level=info msg="Container 0281467801e9551877cd45b260b8d98c5f35c57e7ea32f66240ae00a69e83031: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:41.106674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4010027324.mount: Deactivated successfully. 
Oct 29 00:41:41.111123 containerd[1636]: time="2025-10-29T00:41:41.111087746Z" level=info msg="CreateContainer within sandbox \"2815d8a0e8e0e421e93b48aaaec6d7b8b93b43110f766518118732f5d646d848\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0281467801e9551877cd45b260b8d98c5f35c57e7ea32f66240ae00a69e83031\"" Oct 29 00:41:41.111740 containerd[1636]: time="2025-10-29T00:41:41.111700686Z" level=info msg="StartContainer for \"0281467801e9551877cd45b260b8d98c5f35c57e7ea32f66240ae00a69e83031\"" Oct 29 00:41:41.114105 containerd[1636]: time="2025-10-29T00:41:41.114083777Z" level=info msg="connecting to shim 0281467801e9551877cd45b260b8d98c5f35c57e7ea32f66240ae00a69e83031" address="unix:///run/containerd/s/7f1f043920d015a1fbe13c8edf9541d9c82c529c105fc7156328b390d5960e69" protocol=ttrpc version=3 Oct 29 00:41:41.133580 systemd[1]: Started cri-containerd-0281467801e9551877cd45b260b8d98c5f35c57e7ea32f66240ae00a69e83031.scope - libcontainer container 0281467801e9551877cd45b260b8d98c5f35c57e7ea32f66240ae00a69e83031. 
Oct 29 00:41:41.186375 containerd[1636]: time="2025-10-29T00:41:41.186252388Z" level=info msg="StartContainer for \"0281467801e9551877cd45b260b8d98c5f35c57e7ea32f66240ae00a69e83031\" returns successfully" Oct 29 00:41:41.336412 containerd[1636]: time="2025-10-29T00:41:41.335991485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-bdtmb,Uid:d7159d88-ce16-4504-9be5-78d3402c4719,Namespace:tigera-operator,Attempt:0,}" Oct 29 00:41:41.348650 kubelet[2791]: E1029 00:41:41.348613 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:41.359034 kubelet[2791]: I1029 00:41:41.358892 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x2ctj" podStartSLOduration=1.358867872 podStartE2EDuration="1.358867872s" podCreationTimestamp="2025-10-29 00:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:41:41.358309205 +0000 UTC m=+6.531492416" watchObservedRunningTime="2025-10-29 00:41:41.358867872 +0000 UTC m=+6.532051083" Oct 29 00:41:41.384278 containerd[1636]: time="2025-10-29T00:41:41.384206531Z" level=info msg="connecting to shim ce4cad110b7a4415d5e0f3fabd1b5e380528fc8591807107bea0eb68e7e17876" address="unix:///run/containerd/s/14d2eb83a40f9989c5d57f164747a26756341221a61e1861296f82c85defb733" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:41.421565 systemd[1]: Started cri-containerd-ce4cad110b7a4415d5e0f3fabd1b5e380528fc8591807107bea0eb68e7e17876.scope - libcontainer container ce4cad110b7a4415d5e0f3fabd1b5e380528fc8591807107bea0eb68e7e17876. 
Oct 29 00:41:41.567475 containerd[1636]: time="2025-10-29T00:41:41.567418161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-bdtmb,Uid:d7159d88-ce16-4504-9be5-78d3402c4719,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ce4cad110b7a4415d5e0f3fabd1b5e380528fc8591807107bea0eb68e7e17876\"" Oct 29 00:41:41.568893 containerd[1636]: time="2025-10-29T00:41:41.568861818Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 00:41:43.093486 kubelet[2791]: E1029 00:41:43.093429 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:43.353530 kubelet[2791]: E1029 00:41:43.353360 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:44.355270 kubelet[2791]: E1029 00:41:44.355229 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:44.954733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2163820633.mount: Deactivated successfully. 
Oct 29 00:41:46.009618 containerd[1636]: time="2025-10-29T00:41:46.009536560Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:46.010406 containerd[1636]: time="2025-10-29T00:41:46.010323265Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 29 00:41:46.011748 containerd[1636]: time="2025-10-29T00:41:46.011650838Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:46.013819 containerd[1636]: time="2025-10-29T00:41:46.013782037Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:41:46.014896 containerd[1636]: time="2025-10-29T00:41:46.014863773Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.445965325s" Oct 29 00:41:46.014896 containerd[1636]: time="2025-10-29T00:41:46.014897206Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 29 00:41:46.018909 containerd[1636]: time="2025-10-29T00:41:46.018770125Z" level=info msg="CreateContainer within sandbox \"ce4cad110b7a4415d5e0f3fabd1b5e380528fc8591807107bea0eb68e7e17876\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 00:41:46.029101 containerd[1636]: time="2025-10-29T00:41:46.029045662Z" level=info msg="Container 
d21c18cb4b7a0fde49463e2f4bc472fbb4059a28c480cc889899a6eb423ac985: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:41:46.033113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2369527031.mount: Deactivated successfully. Oct 29 00:41:46.037138 containerd[1636]: time="2025-10-29T00:41:46.037088776Z" level=info msg="CreateContainer within sandbox \"ce4cad110b7a4415d5e0f3fabd1b5e380528fc8591807107bea0eb68e7e17876\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d21c18cb4b7a0fde49463e2f4bc472fbb4059a28c480cc889899a6eb423ac985\"" Oct 29 00:41:46.037685 containerd[1636]: time="2025-10-29T00:41:46.037663638Z" level=info msg="StartContainer for \"d21c18cb4b7a0fde49463e2f4bc472fbb4059a28c480cc889899a6eb423ac985\"" Oct 29 00:41:46.038554 containerd[1636]: time="2025-10-29T00:41:46.038530966Z" level=info msg="connecting to shim d21c18cb4b7a0fde49463e2f4bc472fbb4059a28c480cc889899a6eb423ac985" address="unix:///run/containerd/s/14d2eb83a40f9989c5d57f164747a26756341221a61e1861296f82c85defb733" protocol=ttrpc version=3 Oct 29 00:41:46.092544 systemd[1]: Started cri-containerd-d21c18cb4b7a0fde49463e2f4bc472fbb4059a28c480cc889899a6eb423ac985.scope - libcontainer container d21c18cb4b7a0fde49463e2f4bc472fbb4059a28c480cc889899a6eb423ac985. 
Oct 29 00:41:46.128450 containerd[1636]: time="2025-10-29T00:41:46.128364139Z" level=info msg="StartContainer for \"d21c18cb4b7a0fde49463e2f4bc472fbb4059a28c480cc889899a6eb423ac985\" returns successfully" Oct 29 00:41:46.370586 kubelet[2791]: I1029 00:41:46.370307 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-bdtmb" podStartSLOduration=1.922758348 podStartE2EDuration="6.370280426s" podCreationTimestamp="2025-10-29 00:41:40 +0000 UTC" firstStartedPulling="2025-10-29 00:41:41.568301208 +0000 UTC m=+6.741484419" lastFinishedPulling="2025-10-29 00:41:46.015823286 +0000 UTC m=+11.189006497" observedRunningTime="2025-10-29 00:41:46.369781427 +0000 UTC m=+11.542964649" watchObservedRunningTime="2025-10-29 00:41:46.370280426 +0000 UTC m=+11.543463637" Oct 29 00:41:47.520420 update_engine[1618]: I20251029 00:41:47.519503 1618 update_attempter.cc:509] Updating boot flags... Oct 29 00:41:48.347431 kubelet[2791]: E1029 00:41:48.346363 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:48.352233 kubelet[2791]: E1029 00:41:48.352148 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:51.877183 sudo[1844]: pam_unix(sudo:session): session closed for user root Oct 29 00:41:51.879004 sshd[1843]: Connection closed by 10.0.0.1 port 41810 Oct 29 00:41:51.880889 sshd-session[1840]: pam_unix(sshd:session): session closed for user core Oct 29 00:41:51.886022 systemd-logind[1612]: Session 7 logged out. Waiting for processes to exit. Oct 29 00:41:51.886898 systemd[1]: sshd@6-10.0.0.76:22-10.0.0.1:41810.service: Deactivated successfully. Oct 29 00:41:51.891738 systemd[1]: session-7.scope: Deactivated successfully. 
Oct 29 00:41:51.892327 systemd[1]: session-7.scope: Consumed 5.371s CPU time, 218.7M memory peak. Oct 29 00:41:51.898850 systemd-logind[1612]: Removed session 7. Oct 29 00:41:56.966831 systemd[1]: Created slice kubepods-besteffort-podd3dcf0a9_2308_415f_a407_4817c868209d.slice - libcontainer container kubepods-besteffort-podd3dcf0a9_2308_415f_a407_4817c868209d.slice. Oct 29 00:41:57.039861 kubelet[2791]: I1029 00:41:57.039774 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3dcf0a9-2308-415f-a407-4817c868209d-tigera-ca-bundle\") pod \"calico-typha-58cf9db6f4-zkpld\" (UID: \"d3dcf0a9-2308-415f-a407-4817c868209d\") " pod="calico-system/calico-typha-58cf9db6f4-zkpld" Oct 29 00:41:57.039861 kubelet[2791]: I1029 00:41:57.039851 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt5kw\" (UniqueName: \"kubernetes.io/projected/d3dcf0a9-2308-415f-a407-4817c868209d-kube-api-access-qt5kw\") pod \"calico-typha-58cf9db6f4-zkpld\" (UID: \"d3dcf0a9-2308-415f-a407-4817c868209d\") " pod="calico-system/calico-typha-58cf9db6f4-zkpld" Oct 29 00:41:57.040352 kubelet[2791]: I1029 00:41:57.039881 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d3dcf0a9-2308-415f-a407-4817c868209d-typha-certs\") pod \"calico-typha-58cf9db6f4-zkpld\" (UID: \"d3dcf0a9-2308-415f-a407-4817c868209d\") " pod="calico-system/calico-typha-58cf9db6f4-zkpld" Oct 29 00:41:57.148494 systemd[1]: Created slice kubepods-besteffort-pod2470bff5_57c7_4934_ae1c_8f98abb6d354.slice - libcontainer container kubepods-besteffort-pod2470bff5_57c7_4934_ae1c_8f98abb6d354.slice. 
Oct 29 00:41:57.241353 kubelet[2791]: I1029 00:41:57.241129 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2470bff5-57c7-4934-ae1c-8f98abb6d354-cni-log-dir\") pod \"calico-node-9zp6b\" (UID: \"2470bff5-57c7-4934-ae1c-8f98abb6d354\") " pod="calico-system/calico-node-9zp6b" Oct 29 00:41:57.241353 kubelet[2791]: I1029 00:41:57.241187 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2470bff5-57c7-4934-ae1c-8f98abb6d354-lib-modules\") pod \"calico-node-9zp6b\" (UID: \"2470bff5-57c7-4934-ae1c-8f98abb6d354\") " pod="calico-system/calico-node-9zp6b" Oct 29 00:41:57.241353 kubelet[2791]: I1029 00:41:57.241212 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2470bff5-57c7-4934-ae1c-8f98abb6d354-cni-net-dir\") pod \"calico-node-9zp6b\" (UID: \"2470bff5-57c7-4934-ae1c-8f98abb6d354\") " pod="calico-system/calico-node-9zp6b" Oct 29 00:41:57.241353 kubelet[2791]: I1029 00:41:57.241232 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2470bff5-57c7-4934-ae1c-8f98abb6d354-var-run-calico\") pod \"calico-node-9zp6b\" (UID: \"2470bff5-57c7-4934-ae1c-8f98abb6d354\") " pod="calico-system/calico-node-9zp6b" Oct 29 00:41:57.241353 kubelet[2791]: I1029 00:41:57.241247 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2470bff5-57c7-4934-ae1c-8f98abb6d354-xtables-lock\") pod \"calico-node-9zp6b\" (UID: \"2470bff5-57c7-4934-ae1c-8f98abb6d354\") " pod="calico-system/calico-node-9zp6b" Oct 29 00:41:57.241713 kubelet[2791]: I1029 00:41:57.241266 2791 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twxkh\" (UniqueName: \"kubernetes.io/projected/2470bff5-57c7-4934-ae1c-8f98abb6d354-kube-api-access-twxkh\") pod \"calico-node-9zp6b\" (UID: \"2470bff5-57c7-4934-ae1c-8f98abb6d354\") " pod="calico-system/calico-node-9zp6b" Oct 29 00:41:57.241713 kubelet[2791]: I1029 00:41:57.241288 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2470bff5-57c7-4934-ae1c-8f98abb6d354-policysync\") pod \"calico-node-9zp6b\" (UID: \"2470bff5-57c7-4934-ae1c-8f98abb6d354\") " pod="calico-system/calico-node-9zp6b" Oct 29 00:41:57.241713 kubelet[2791]: I1029 00:41:57.241311 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2470bff5-57c7-4934-ae1c-8f98abb6d354-var-lib-calico\") pod \"calico-node-9zp6b\" (UID: \"2470bff5-57c7-4934-ae1c-8f98abb6d354\") " pod="calico-system/calico-node-9zp6b" Oct 29 00:41:57.241713 kubelet[2791]: I1029 00:41:57.241325 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2470bff5-57c7-4934-ae1c-8f98abb6d354-cni-bin-dir\") pod \"calico-node-9zp6b\" (UID: \"2470bff5-57c7-4934-ae1c-8f98abb6d354\") " pod="calico-system/calico-node-9zp6b" Oct 29 00:41:57.241713 kubelet[2791]: I1029 00:41:57.241340 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2470bff5-57c7-4934-ae1c-8f98abb6d354-flexvol-driver-host\") pod \"calico-node-9zp6b\" (UID: \"2470bff5-57c7-4934-ae1c-8f98abb6d354\") " pod="calico-system/calico-node-9zp6b" Oct 29 00:41:57.241909 kubelet[2791]: I1029 00:41:57.241356 2791 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2470bff5-57c7-4934-ae1c-8f98abb6d354-node-certs\") pod \"calico-node-9zp6b\" (UID: \"2470bff5-57c7-4934-ae1c-8f98abb6d354\") " pod="calico-system/calico-node-9zp6b" Oct 29 00:41:57.241909 kubelet[2791]: I1029 00:41:57.241373 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2470bff5-57c7-4934-ae1c-8f98abb6d354-tigera-ca-bundle\") pod \"calico-node-9zp6b\" (UID: \"2470bff5-57c7-4934-ae1c-8f98abb6d354\") " pod="calico-system/calico-node-9zp6b" Oct 29 00:41:57.272280 kubelet[2791]: E1029 00:41:57.272225 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:57.272996 containerd[1636]: time="2025-10-29T00:41:57.272709000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58cf9db6f4-zkpld,Uid:d3dcf0a9-2308-415f-a407-4817c868209d,Namespace:calico-system,Attempt:0,}" Oct 29 00:41:57.305913 containerd[1636]: time="2025-10-29T00:41:57.305854405Z" level=info msg="connecting to shim 7c7574a40743968e80ac53606cc0b9efc77b18afa2ab1a4cfbb5f2af5346b86a" address="unix:///run/containerd/s/ca728a8771627ce8b04e0572b020356755beb6d27166500a66c0e5db697c2555" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:41:57.338546 systemd[1]: Started cri-containerd-7c7574a40743968e80ac53606cc0b9efc77b18afa2ab1a4cfbb5f2af5346b86a.scope - libcontainer container 7c7574a40743968e80ac53606cc0b9efc77b18afa2ab1a4cfbb5f2af5346b86a. 
Oct 29 00:41:57.350259 kubelet[2791]: E1029 00:41:57.350170 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d"
Oct 29 00:41:57.355741 kubelet[2791]: E1029 00:41:57.355700 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:41:57.355741 kubelet[2791]: W1029 00:41:57.355735 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:41:57.355878 kubelet[2791]: E1029 00:41:57.355777 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:41:57.402348 containerd[1636]: time="2025-10-29T00:41:57.402279196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58cf9db6f4-zkpld,Uid:d3dcf0a9-2308-415f-a407-4817c868209d,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c7574a40743968e80ac53606cc0b9efc77b18afa2ab1a4cfbb5f2af5346b86a\""
Oct 29 00:41:57.403847 kubelet[2791]: E1029 00:41:57.403800 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:41:57.406658 containerd[1636]: time="2025-10-29T00:41:57.406607275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Oct 29 00:41:57.443804 kubelet[2791]: I1029 00:41:57.443663 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/384bfac2-527a-4555-ad7a-580a89495c1d-socket-dir\") pod \"csi-node-driver-zgj2m\" (UID: \"384bfac2-527a-4555-ad7a-580a89495c1d\") " pod="calico-system/csi-node-driver-zgj2m"
Oct 29 00:41:57.444053 kubelet[2791]: I1029 00:41:57.443959 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/384bfac2-527a-4555-ad7a-580a89495c1d-varrun\") pod \"csi-node-driver-zgj2m\" (UID: \"384bfac2-527a-4555-ad7a-580a89495c1d\") " pod="calico-system/csi-node-driver-zgj2m"
Oct 29 00:41:57.444300 kubelet[2791]: I1029 00:41:57.444264 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrvg2\" (UniqueName: \"kubernetes.io/projected/384bfac2-527a-4555-ad7a-580a89495c1d-kube-api-access-mrvg2\") pod \"csi-node-driver-zgj2m\" (UID: \"384bfac2-527a-4555-ad7a-580a89495c1d\") " pod="calico-system/csi-node-driver-zgj2m"
Oct 29 00:41:57.444600 kubelet[2791]: I1029 00:41:57.444573 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/384bfac2-527a-4555-ad7a-580a89495c1d-registration-dir\") pod \"csi-node-driver-zgj2m\" (UID: \"384bfac2-527a-4555-ad7a-580a89495c1d\") " pod="calico-system/csi-node-driver-zgj2m"
Oct 29 00:41:57.446366 kubelet[2791]: I1029 00:41:57.446000 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/384bfac2-527a-4555-ad7a-580a89495c1d-kubelet-dir\") pod \"csi-node-driver-zgj2m\" (UID: \"384bfac2-527a-4555-ad7a-580a89495c1d\") " pod="calico-system/csi-node-driver-zgj2m"
Oct 29 00:41:57.451768 kubelet[2791]: E1029 00:41:57.451721 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:41:57.452610 containerd[1636]: time="2025-10-29T00:41:57.452556689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9zp6b,Uid:2470bff5-57c7-4934-ae1c-8f98abb6d354,Namespace:calico-system,Attempt:0,}"
Oct 29 00:41:57.476155 containerd[1636]: time="2025-10-29T00:41:57.476101581Z" level=info msg="connecting to shim 87c0941fe1cffd8bd6e8f0f30d728b199981f8cfb92b9b8355c2551aa7394df5" address="unix:///run/containerd/s/b151eebe067379c533c1fe19cf8b02f294f85e5262940896efab8b823e5b2126" namespace=k8s.io protocol=ttrpc version=3
Oct 29 00:41:57.506627 systemd[1]: Started cri-containerd-87c0941fe1cffd8bd6e8f0f30d728b199981f8cfb92b9b8355c2551aa7394df5.scope - libcontainer container 87c0941fe1cffd8bd6e8f0f30d728b199981f8cfb92b9b8355c2551aa7394df5.
Oct 29 00:41:57.549919 kubelet[2791]: E1029 00:41:57.549900 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:41:57.549919 kubelet[2791]: W1029 00:41:57.549912 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:41:57.549993 kubelet[2791]: E1029 00:41:57.549927 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:57.550163 kubelet[2791]: E1029 00:41:57.550147 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.550163 kubelet[2791]: W1029 00:41:57.550158 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.550215 kubelet[2791]: E1029 00:41:57.550173 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:57.550367 kubelet[2791]: E1029 00:41:57.550351 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.550367 kubelet[2791]: W1029 00:41:57.550361 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.550446 kubelet[2791]: E1029 00:41:57.550421 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:57.550591 kubelet[2791]: E1029 00:41:57.550575 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.550591 kubelet[2791]: W1029 00:41:57.550586 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.550651 kubelet[2791]: E1029 00:41:57.550613 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:57.550777 kubelet[2791]: E1029 00:41:57.550761 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.550777 kubelet[2791]: W1029 00:41:57.550771 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.550839 kubelet[2791]: E1029 00:41:57.550805 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:57.550963 kubelet[2791]: E1029 00:41:57.550946 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.550963 kubelet[2791]: W1029 00:41:57.550957 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.551007 kubelet[2791]: E1029 00:41:57.550971 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:57.551182 kubelet[2791]: E1029 00:41:57.551166 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.551182 kubelet[2791]: W1029 00:41:57.551177 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.551233 kubelet[2791]: E1029 00:41:57.551194 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:57.551410 kubelet[2791]: E1029 00:41:57.551378 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.551410 kubelet[2791]: W1029 00:41:57.551405 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.551460 kubelet[2791]: E1029 00:41:57.551419 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:57.551610 kubelet[2791]: E1029 00:41:57.551594 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.551610 kubelet[2791]: W1029 00:41:57.551606 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.551657 kubelet[2791]: E1029 00:41:57.551621 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:57.551812 kubelet[2791]: E1029 00:41:57.551797 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.551812 kubelet[2791]: W1029 00:41:57.551810 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.551868 kubelet[2791]: E1029 00:41:57.551825 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:57.552012 kubelet[2791]: E1029 00:41:57.551996 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.552012 kubelet[2791]: W1029 00:41:57.552008 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.552072 kubelet[2791]: E1029 00:41:57.552024 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:57.552212 kubelet[2791]: E1029 00:41:57.552197 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.552212 kubelet[2791]: W1029 00:41:57.552207 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.552254 kubelet[2791]: E1029 00:41:57.552232 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:57.552414 kubelet[2791]: E1029 00:41:57.552398 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.552414 kubelet[2791]: W1029 00:41:57.552410 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.552467 kubelet[2791]: E1029 00:41:57.552442 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:57.552592 kubelet[2791]: E1029 00:41:57.552577 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.552592 kubelet[2791]: W1029 00:41:57.552587 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.552637 kubelet[2791]: E1029 00:41:57.552611 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:57.552810 kubelet[2791]: E1029 00:41:57.552795 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.552810 kubelet[2791]: W1029 00:41:57.552806 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.552860 kubelet[2791]: E1029 00:41:57.552821 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:57.553068 kubelet[2791]: E1029 00:41:57.553044 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.553105 kubelet[2791]: W1029 00:41:57.553069 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.553105 kubelet[2791]: E1029 00:41:57.553084 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:57.553263 kubelet[2791]: E1029 00:41:57.553251 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.553263 kubelet[2791]: W1029 00:41:57.553260 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.553311 kubelet[2791]: E1029 00:41:57.553268 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:57.553509 kubelet[2791]: E1029 00:41:57.553497 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.553509 kubelet[2791]: W1029 00:41:57.553507 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.553561 kubelet[2791]: E1029 00:41:57.553515 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:57.563237 kubelet[2791]: E1029 00:41:57.563196 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.563237 kubelet[2791]: W1029 00:41:57.563223 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.563413 kubelet[2791]: E1029 00:41:57.563249 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:41:57.579146 kubelet[2791]: E1029 00:41:57.579107 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:41:57.579146 kubelet[2791]: W1029 00:41:57.579128 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:41:57.579146 kubelet[2791]: E1029 00:41:57.579144 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:41:57.580639 containerd[1636]: time="2025-10-29T00:41:57.580574323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9zp6b,Uid:2470bff5-57c7-4934-ae1c-8f98abb6d354,Namespace:calico-system,Attempt:0,} returns sandbox id \"87c0941fe1cffd8bd6e8f0f30d728b199981f8cfb92b9b8355c2551aa7394df5\"" Oct 29 00:41:57.581447 kubelet[2791]: E1029 00:41:57.581400 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:41:59.328643 kubelet[2791]: E1029 00:41:59.328581 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d" Oct 29 00:41:59.568433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457392378.mount: Deactivated successfully. 
Oct 29 00:42:00.038730 containerd[1636]: time="2025-10-29T00:42:00.038659776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:00.039544 containerd[1636]: time="2025-10-29T00:42:00.039511442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 29 00:42:00.040805 containerd[1636]: time="2025-10-29T00:42:00.040774013Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:00.042772 containerd[1636]: time="2025-10-29T00:42:00.042721344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:00.043204 containerd[1636]: time="2025-10-29T00:42:00.043175681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.636510327s" Oct 29 00:42:00.043232 containerd[1636]: time="2025-10-29T00:42:00.043204064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 29 00:42:00.049349 containerd[1636]: time="2025-10-29T00:42:00.049322240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 29 00:42:00.065765 containerd[1636]: time="2025-10-29T00:42:00.065719065Z" level=info msg="CreateContainer within sandbox \"7c7574a40743968e80ac53606cc0b9efc77b18afa2ab1a4cfbb5f2af5346b86a\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 29 00:42:00.073862 containerd[1636]: time="2025-10-29T00:42:00.073821622Z" level=info msg="Container 339bce2af34291013dd52b666c2ca928836a32927a95677fa46a8c3842e399a9: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:42:00.081870 containerd[1636]: time="2025-10-29T00:42:00.081808962Z" level=info msg="CreateContainer within sandbox \"7c7574a40743968e80ac53606cc0b9efc77b18afa2ab1a4cfbb5f2af5346b86a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"339bce2af34291013dd52b666c2ca928836a32927a95677fa46a8c3842e399a9\"" Oct 29 00:42:00.082459 containerd[1636]: time="2025-10-29T00:42:00.082421987Z" level=info msg="StartContainer for \"339bce2af34291013dd52b666c2ca928836a32927a95677fa46a8c3842e399a9\"" Oct 29 00:42:00.084638 containerd[1636]: time="2025-10-29T00:42:00.084611565Z" level=info msg="connecting to shim 339bce2af34291013dd52b666c2ca928836a32927a95677fa46a8c3842e399a9" address="unix:///run/containerd/s/ca728a8771627ce8b04e0572b020356755beb6d27166500a66c0e5db697c2555" protocol=ttrpc version=3 Oct 29 00:42:00.108597 systemd[1]: Started cri-containerd-339bce2af34291013dd52b666c2ca928836a32927a95677fa46a8c3842e399a9.scope - libcontainer container 339bce2af34291013dd52b666c2ca928836a32927a95677fa46a8c3842e399a9. 
Oct 29 00:42:00.163851 containerd[1636]: time="2025-10-29T00:42:00.163761066Z" level=info msg="StartContainer for \"339bce2af34291013dd52b666c2ca928836a32927a95677fa46a8c3842e399a9\" returns successfully" Oct 29 00:42:00.388489 kubelet[2791]: E1029 00:42:00.388318 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:00.456108 kubelet[2791]: E1029 00:42:00.456067 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.456108 kubelet[2791]: W1029 00:42:00.456095 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.456108 kubelet[2791]: E1029 00:42:00.456122 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.456459 kubelet[2791]: E1029 00:42:00.456334 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.456459 kubelet[2791]: W1029 00:42:00.456345 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.456459 kubelet[2791]: E1029 00:42:00.456356 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.456611 kubelet[2791]: E1029 00:42:00.456585 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.456611 kubelet[2791]: W1029 00:42:00.456598 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.456689 kubelet[2791]: E1029 00:42:00.456612 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.456914 kubelet[2791]: E1029 00:42:00.456890 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.456914 kubelet[2791]: W1029 00:42:00.456902 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.456914 kubelet[2791]: E1029 00:42:00.456913 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.457156 kubelet[2791]: E1029 00:42:00.457131 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.457156 kubelet[2791]: W1029 00:42:00.457143 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.457156 kubelet[2791]: E1029 00:42:00.457154 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.457495 kubelet[2791]: E1029 00:42:00.457477 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.457495 kubelet[2791]: W1029 00:42:00.457491 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.457599 kubelet[2791]: E1029 00:42:00.457505 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.457761 kubelet[2791]: E1029 00:42:00.457707 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.457761 kubelet[2791]: W1029 00:42:00.457722 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.457761 kubelet[2791]: E1029 00:42:00.457733 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.458055 kubelet[2791]: E1029 00:42:00.457997 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.458055 kubelet[2791]: W1029 00:42:00.458039 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.458055 kubelet[2791]: E1029 00:42:00.458052 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.458281 kubelet[2791]: E1029 00:42:00.458264 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.458281 kubelet[2791]: W1029 00:42:00.458275 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.458334 kubelet[2791]: E1029 00:42:00.458286 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.458515 kubelet[2791]: E1029 00:42:00.458494 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.458515 kubelet[2791]: W1029 00:42:00.458511 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.458614 kubelet[2791]: E1029 00:42:00.458524 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.458791 kubelet[2791]: E1029 00:42:00.458767 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.458791 kubelet[2791]: W1029 00:42:00.458779 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.458791 kubelet[2791]: E1029 00:42:00.458789 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.458968 kubelet[2791]: E1029 00:42:00.458949 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.458968 kubelet[2791]: W1029 00:42:00.458959 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.458968 kubelet[2791]: E1029 00:42:00.458968 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.459234 kubelet[2791]: E1029 00:42:00.459206 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.459274 kubelet[2791]: W1029 00:42:00.459237 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.459274 kubelet[2791]: E1029 00:42:00.459266 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.459598 kubelet[2791]: E1029 00:42:00.459565 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.459598 kubelet[2791]: W1029 00:42:00.459581 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.459598 kubelet[2791]: E1029 00:42:00.459597 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.459808 kubelet[2791]: E1029 00:42:00.459790 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.459808 kubelet[2791]: W1029 00:42:00.459803 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.459889 kubelet[2791]: E1029 00:42:00.459813 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.472100 kubelet[2791]: E1029 00:42:00.472075 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.472100 kubelet[2791]: W1029 00:42:00.472087 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.472100 kubelet[2791]: E1029 00:42:00.472098 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.472361 kubelet[2791]: E1029 00:42:00.472337 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.472361 kubelet[2791]: W1029 00:42:00.472347 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.472361 kubelet[2791]: E1029 00:42:00.472359 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.472599 kubelet[2791]: E1029 00:42:00.472568 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.472599 kubelet[2791]: W1029 00:42:00.472583 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.472693 kubelet[2791]: E1029 00:42:00.472603 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.472900 kubelet[2791]: E1029 00:42:00.472854 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.472900 kubelet[2791]: W1029 00:42:00.472888 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.472974 kubelet[2791]: E1029 00:42:00.472926 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.473208 kubelet[2791]: E1029 00:42:00.473182 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.473208 kubelet[2791]: W1029 00:42:00.473194 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.473208 kubelet[2791]: E1029 00:42:00.473208 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.473409 kubelet[2791]: E1029 00:42:00.473374 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.473409 kubelet[2791]: W1029 00:42:00.473404 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.473471 kubelet[2791]: E1029 00:42:00.473419 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.473607 kubelet[2791]: E1029 00:42:00.473591 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.473607 kubelet[2791]: W1029 00:42:00.473603 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.473769 kubelet[2791]: E1029 00:42:00.473631 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.473769 kubelet[2791]: E1029 00:42:00.473762 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.473825 kubelet[2791]: W1029 00:42:00.473772 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.473825 kubelet[2791]: E1029 00:42:00.473799 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.473974 kubelet[2791]: E1029 00:42:00.473958 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.473974 kubelet[2791]: W1029 00:42:00.473968 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.474060 kubelet[2791]: E1029 00:42:00.473982 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.474248 kubelet[2791]: E1029 00:42:00.474229 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.474248 kubelet[2791]: W1029 00:42:00.474241 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.474313 kubelet[2791]: E1029 00:42:00.474254 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.474496 kubelet[2791]: E1029 00:42:00.474475 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.474496 kubelet[2791]: W1029 00:42:00.474490 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.474584 kubelet[2791]: E1029 00:42:00.474508 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.474730 kubelet[2791]: E1029 00:42:00.474712 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.474730 kubelet[2791]: W1029 00:42:00.474725 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.474812 kubelet[2791]: E1029 00:42:00.474741 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.475041 kubelet[2791]: E1029 00:42:00.475020 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.475041 kubelet[2791]: W1029 00:42:00.475033 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.475133 kubelet[2791]: E1029 00:42:00.475047 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.475239 kubelet[2791]: E1029 00:42:00.475222 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.475239 kubelet[2791]: W1029 00:42:00.475232 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.475298 kubelet[2791]: E1029 00:42:00.475245 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.475486 kubelet[2791]: E1029 00:42:00.475467 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.475486 kubelet[2791]: W1029 00:42:00.475478 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.475574 kubelet[2791]: E1029 00:42:00.475509 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.475660 kubelet[2791]: E1029 00:42:00.475641 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.475660 kubelet[2791]: W1029 00:42:00.475651 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.475737 kubelet[2791]: E1029 00:42:00.475683 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.475841 kubelet[2791]: E1029 00:42:00.475822 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.475841 kubelet[2791]: W1029 00:42:00.475834 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.475841 kubelet[2791]: E1029 00:42:00.475842 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:00.476165 kubelet[2791]: E1029 00:42:00.476145 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:00.476165 kubelet[2791]: W1029 00:42:00.476158 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:00.476241 kubelet[2791]: E1029 00:42:00.476170 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:00.630180 kubelet[2791]: I1029 00:42:00.630083 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58cf9db6f4-zkpld" podStartSLOduration=1.986656169 podStartE2EDuration="4.62993643s" podCreationTimestamp="2025-10-29 00:41:56 +0000 UTC" firstStartedPulling="2025-10-29 00:41:57.405934906 +0000 UTC m=+22.579118118" lastFinishedPulling="2025-10-29 00:42:00.049215168 +0000 UTC m=+25.222398379" observedRunningTime="2025-10-29 00:42:00.629864604 +0000 UTC m=+25.803047815" watchObservedRunningTime="2025-10-29 00:42:00.62993643 +0000 UTC m=+25.803119631" Oct 29 00:42:01.327513 kubelet[2791]: E1029 00:42:01.327439 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d" Oct 29 00:42:01.390410 kubelet[2791]: I1029 00:42:01.390346 2791 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:42:01.390818 kubelet[2791]: E1029 00:42:01.390699 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:01.465884 kubelet[2791]: E1029 00:42:01.465851 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.465884 kubelet[2791]: W1029 00:42:01.465874 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.466059 kubelet[2791]: E1029 00:42:01.465897 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.466097 kubelet[2791]: E1029 00:42:01.466091 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.466119 kubelet[2791]: W1029 00:42:01.466099 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.466119 kubelet[2791]: E1029 00:42:01.466107 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.466285 kubelet[2791]: E1029 00:42:01.466272 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.466285 kubelet[2791]: W1029 00:42:01.466282 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.466354 kubelet[2791]: E1029 00:42:01.466290 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.466510 kubelet[2791]: E1029 00:42:01.466488 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.466510 kubelet[2791]: W1029 00:42:01.466507 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.466552 kubelet[2791]: E1029 00:42:01.466515 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.466717 kubelet[2791]: E1029 00:42:01.466695 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.466717 kubelet[2791]: W1029 00:42:01.466710 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.466717 kubelet[2791]: E1029 00:42:01.466721 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.466928 kubelet[2791]: E1029 00:42:01.466901 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.466928 kubelet[2791]: W1029 00:42:01.466916 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.466928 kubelet[2791]: E1029 00:42:01.466924 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.467102 kubelet[2791]: E1029 00:42:01.467091 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.467102 kubelet[2791]: W1029 00:42:01.467100 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.467148 kubelet[2791]: E1029 00:42:01.467107 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.467274 kubelet[2791]: E1029 00:42:01.467262 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.467274 kubelet[2791]: W1029 00:42:01.467272 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.467325 kubelet[2791]: E1029 00:42:01.467279 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.467485 kubelet[2791]: E1029 00:42:01.467471 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.467485 kubelet[2791]: W1029 00:42:01.467480 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.467540 kubelet[2791]: E1029 00:42:01.467489 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.467663 kubelet[2791]: E1029 00:42:01.467651 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.467663 kubelet[2791]: W1029 00:42:01.467660 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.467723 kubelet[2791]: E1029 00:42:01.467667 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.467869 kubelet[2791]: E1029 00:42:01.467856 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.467869 kubelet[2791]: W1029 00:42:01.467865 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.467918 kubelet[2791]: E1029 00:42:01.467873 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.468048 kubelet[2791]: E1029 00:42:01.468036 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.468048 kubelet[2791]: W1029 00:42:01.468045 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.468101 kubelet[2791]: E1029 00:42:01.468071 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.468251 kubelet[2791]: E1029 00:42:01.468238 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.468251 kubelet[2791]: W1029 00:42:01.468246 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.468318 kubelet[2791]: E1029 00:42:01.468254 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.468465 kubelet[2791]: E1029 00:42:01.468452 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.468465 kubelet[2791]: W1029 00:42:01.468461 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.468527 kubelet[2791]: E1029 00:42:01.468468 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.468675 kubelet[2791]: E1029 00:42:01.468642 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.468675 kubelet[2791]: W1029 00:42:01.468653 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.468675 kubelet[2791]: E1029 00:42:01.468660 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.479123 kubelet[2791]: E1029 00:42:01.479079 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.479123 kubelet[2791]: W1029 00:42:01.479107 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.479199 kubelet[2791]: E1029 00:42:01.479131 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.479398 kubelet[2791]: E1029 00:42:01.479356 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.479398 kubelet[2791]: W1029 00:42:01.479368 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.479465 kubelet[2791]: E1029 00:42:01.479412 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.479682 kubelet[2791]: E1029 00:42:01.479649 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.479682 kubelet[2791]: W1029 00:42:01.479673 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.479744 kubelet[2791]: E1029 00:42:01.479689 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.479920 kubelet[2791]: E1029 00:42:01.479892 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.479920 kubelet[2791]: W1029 00:42:01.479905 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.479982 kubelet[2791]: E1029 00:42:01.479922 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.480111 kubelet[2791]: E1029 00:42:01.480093 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.480111 kubelet[2791]: W1029 00:42:01.480106 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.480157 kubelet[2791]: E1029 00:42:01.480123 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.480348 kubelet[2791]: E1029 00:42:01.480332 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.480348 kubelet[2791]: W1029 00:42:01.480342 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.480428 kubelet[2791]: E1029 00:42:01.480355 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.480562 kubelet[2791]: E1029 00:42:01.480546 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.480562 kubelet[2791]: W1029 00:42:01.480555 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.480607 kubelet[2791]: E1029 00:42:01.480569 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.480771 kubelet[2791]: E1029 00:42:01.480756 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.480771 kubelet[2791]: W1029 00:42:01.480766 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.480819 kubelet[2791]: E1029 00:42:01.480797 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.480932 kubelet[2791]: E1029 00:42:01.480917 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.480932 kubelet[2791]: W1029 00:42:01.480927 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.480973 kubelet[2791]: E1029 00:42:01.480952 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.481096 kubelet[2791]: E1029 00:42:01.481082 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.481096 kubelet[2791]: W1029 00:42:01.481091 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.481143 kubelet[2791]: E1029 00:42:01.481109 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.481263 kubelet[2791]: E1029 00:42:01.481248 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.481263 kubelet[2791]: W1029 00:42:01.481257 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.481308 kubelet[2791]: E1029 00:42:01.481269 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.481486 kubelet[2791]: E1029 00:42:01.481468 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.481486 kubelet[2791]: W1029 00:42:01.481482 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.481558 kubelet[2791]: E1029 00:42:01.481498 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.481791 kubelet[2791]: E1029 00:42:01.481768 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.481791 kubelet[2791]: W1029 00:42:01.481783 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.481876 kubelet[2791]: E1029 00:42:01.481806 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.482047 kubelet[2791]: E1029 00:42:01.482022 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.482047 kubelet[2791]: W1029 00:42:01.482035 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.482047 kubelet[2791]: E1029 00:42:01.482045 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.482244 kubelet[2791]: E1029 00:42:01.482226 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.482244 kubelet[2791]: W1029 00:42:01.482238 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.482332 kubelet[2791]: E1029 00:42:01.482261 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.482410 kubelet[2791]: E1029 00:42:01.482394 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.482410 kubelet[2791]: W1029 00:42:01.482405 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.482633 kubelet[2791]: E1029 00:42:01.482603 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.482633 kubelet[2791]: W1029 00:42:01.482622 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.482633 kubelet[2791]: E1029 00:42:01.482637 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:01.482749 kubelet[2791]: E1029 00:42:01.482664 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:42:01.482956 kubelet[2791]: E1029 00:42:01.482942 2791 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:42:01.482956 kubelet[2791]: W1029 00:42:01.482952 2791 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:42:01.483040 kubelet[2791]: E1029 00:42:01.482962 2791 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:42:02.386515 containerd[1636]: time="2025-10-29T00:42:02.386449428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:02.387216 containerd[1636]: time="2025-10-29T00:42:02.387183070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 29 00:42:02.388499 containerd[1636]: time="2025-10-29T00:42:02.388410563Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:02.390819 containerd[1636]: time="2025-10-29T00:42:02.390781250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:02.391366 containerd[1636]: time="2025-10-29T00:42:02.391312932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.341800734s" Oct 29 00:42:02.391366 containerd[1636]: time="2025-10-29T00:42:02.391349120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 29 00:42:02.393235 containerd[1636]: time="2025-10-29T00:42:02.393201992Z" level=info msg="CreateContainer within sandbox \"87c0941fe1cffd8bd6e8f0f30d728b199981f8cfb92b9b8355c2551aa7394df5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 29 00:42:02.402474 containerd[1636]: time="2025-10-29T00:42:02.402432814Z" level=info msg="Container c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:42:02.412400 containerd[1636]: time="2025-10-29T00:42:02.412324021Z" level=info msg="CreateContainer within sandbox \"87c0941fe1cffd8bd6e8f0f30d728b199981f8cfb92b9b8355c2551aa7394df5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d\"" Oct 29 00:42:02.413050 containerd[1636]: time="2025-10-29T00:42:02.413001868Z" level=info msg="StartContainer for \"c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d\"" Oct 29 00:42:02.414471 containerd[1636]: time="2025-10-29T00:42:02.414435119Z" level=info msg="connecting to shim c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d" address="unix:///run/containerd/s/b151eebe067379c533c1fe19cf8b02f294f85e5262940896efab8b823e5b2126" protocol=ttrpc version=3 Oct 29 00:42:02.444634 systemd[1]: Started cri-containerd-c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d.scope - libcontainer container 
c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d. Oct 29 00:42:02.506169 systemd[1]: cri-containerd-c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d.scope: Deactivated successfully. Oct 29 00:42:02.508792 containerd[1636]: time="2025-10-29T00:42:02.508752864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d\" id:\"c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d\" pid:3526 exited_at:{seconds:1761698522 nanos:507820518}" Oct 29 00:42:02.666845 containerd[1636]: time="2025-10-29T00:42:02.666621172Z" level=info msg="received exit event container_id:\"c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d\" id:\"c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d\" pid:3526 exited_at:{seconds:1761698522 nanos:507820518}" Oct 29 00:42:02.671952 containerd[1636]: time="2025-10-29T00:42:02.671802134Z" level=info msg="StartContainer for \"c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d\" returns successfully" Oct 29 00:42:02.694323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c07786db6a6ec6c452e1999fdc5b5d2520cb927b212a6ef6e1525cdfee3f214d-rootfs.mount: Deactivated successfully. 
Oct 29 00:42:02.713439 kubelet[2791]: I1029 00:42:02.713374 2791 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:42:02.714103 kubelet[2791]: E1029 00:42:02.713801 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:03.327222 kubelet[2791]: E1029 00:42:03.327168 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d" Oct 29 00:42:03.396065 kubelet[2791]: E1029 00:42:03.395801 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:03.396065 kubelet[2791]: E1029 00:42:03.395867 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:03.397448 containerd[1636]: time="2025-10-29T00:42:03.397361192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 29 00:42:05.327888 kubelet[2791]: E1029 00:42:05.327817 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d" Oct 29 00:42:05.997468 containerd[1636]: time="2025-10-29T00:42:05.997416271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:05.998155 
containerd[1636]: time="2025-10-29T00:42:05.998109386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 29 00:42:05.999224 containerd[1636]: time="2025-10-29T00:42:05.999156507Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:06.001599 containerd[1636]: time="2025-10-29T00:42:06.001552718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:06.002063 containerd[1636]: time="2025-10-29T00:42:06.002024717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.604603762s" Oct 29 00:42:06.002063 containerd[1636]: time="2025-10-29T00:42:06.002055304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 29 00:42:06.004106 containerd[1636]: time="2025-10-29T00:42:06.004081367Z" level=info msg="CreateContainer within sandbox \"87c0941fe1cffd8bd6e8f0f30d728b199981f8cfb92b9b8355c2551aa7394df5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 29 00:42:06.013765 containerd[1636]: time="2025-10-29T00:42:06.013717164Z" level=info msg="Container 2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:42:06.028734 containerd[1636]: time="2025-10-29T00:42:06.028681049Z" level=info msg="CreateContainer within sandbox 
\"87c0941fe1cffd8bd6e8f0f30d728b199981f8cfb92b9b8355c2551aa7394df5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5\"" Oct 29 00:42:06.031300 containerd[1636]: time="2025-10-29T00:42:06.029271050Z" level=info msg="StartContainer for \"2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5\"" Oct 29 00:42:06.031300 containerd[1636]: time="2025-10-29T00:42:06.030626741Z" level=info msg="connecting to shim 2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5" address="unix:///run/containerd/s/b151eebe067379c533c1fe19cf8b02f294f85e5262940896efab8b823e5b2126" protocol=ttrpc version=3 Oct 29 00:42:06.064587 systemd[1]: Started cri-containerd-2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5.scope - libcontainer container 2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5. Oct 29 00:42:06.115691 containerd[1636]: time="2025-10-29T00:42:06.115614195Z" level=info msg="StartContainer for \"2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5\" returns successfully" Oct 29 00:42:06.426238 kubelet[2791]: E1029 00:42:06.426197 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:07.328020 kubelet[2791]: E1029 00:42:07.327971 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d" Oct 29 00:42:07.427377 kubelet[2791]: E1029 00:42:07.427329 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 
00:42:07.584975 systemd[1]: cri-containerd-2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5.scope: Deactivated successfully. Oct 29 00:42:07.585345 systemd[1]: cri-containerd-2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5.scope: Consumed 703ms CPU time, 179.4M memory peak, 3.9M read from disk, 171.3M written to disk. Oct 29 00:42:07.586423 containerd[1636]: time="2025-10-29T00:42:07.586287543Z" level=info msg="received exit event container_id:\"2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5\" id:\"2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5\" pid:3585 exited_at:{seconds:1761698527 nanos:585613435}" Oct 29 00:42:07.586705 containerd[1636]: time="2025-10-29T00:42:07.586458354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5\" id:\"2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5\" pid:3585 exited_at:{seconds:1761698527 nanos:585613435}" Oct 29 00:42:07.611002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e324768ade3faa9bbf14ca006017b592872d682a6401479b8004fc4e3ca6ab5-rootfs.mount: Deactivated successfully. Oct 29 00:42:07.647015 kubelet[2791]: I1029 00:42:07.646956 2791 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 29 00:42:07.899604 systemd[1]: Created slice kubepods-besteffort-podf8b0c773_fb85_46da_8c2b_d019ba347f69.slice - libcontainer container kubepods-besteffort-podf8b0c773_fb85_46da_8c2b_d019ba347f69.slice. Oct 29 00:42:07.922904 systemd[1]: Created slice kubepods-besteffort-pod6c784c32_7754_4ac4_867b_3b42e31408ba.slice - libcontainer container kubepods-besteffort-pod6c784c32_7754_4ac4_867b_3b42e31408ba.slice. 
Oct 29 00:42:07.926644 kubelet[2791]: I1029 00:42:07.926606 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8b0c773-fb85-46da-8c2b-d019ba347f69-calico-apiserver-certs\") pod \"calico-apiserver-bfff496d5-kp5mz\" (UID: \"f8b0c773-fb85-46da-8c2b-d019ba347f69\") " pod="calico-apiserver/calico-apiserver-bfff496d5-kp5mz" Oct 29 00:42:07.926773 kubelet[2791]: I1029 00:42:07.926695 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6c784c32-7754-4ac4-867b-3b42e31408ba-calico-apiserver-certs\") pod \"calico-apiserver-bfff496d5-c6zrj\" (UID: \"6c784c32-7754-4ac4-867b-3b42e31408ba\") " pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" Oct 29 00:42:07.926773 kubelet[2791]: I1029 00:42:07.926717 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76r46\" (UniqueName: \"kubernetes.io/projected/e17e9d9b-8d8f-42d8-ac35-8c432eed92e3-kube-api-access-76r46\") pod \"coredns-668d6bf9bc-lp2g2\" (UID: \"e17e9d9b-8d8f-42d8-ac35-8c432eed92e3\") " pod="kube-system/coredns-668d6bf9bc-lp2g2" Oct 29 00:42:07.926773 kubelet[2791]: I1029 00:42:07.926738 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f-whisker-ca-bundle\") pod \"whisker-5b7697596f-dhwsn\" (UID: \"4404b88e-26a3-4dc9-aeb9-4f42e9ede99f\") " pod="calico-system/whisker-5b7697596f-dhwsn" Oct 29 00:42:07.926773 kubelet[2791]: I1029 00:42:07.926767 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8171a21c-8e10-436f-a7d7-4945a41db439-tigera-ca-bundle\") pod 
\"calico-kube-controllers-7f6cdb67dc-b8vlf\" (UID: \"8171a21c-8e10-436f-a7d7-4945a41db439\") " pod="calico-system/calico-kube-controllers-7f6cdb67dc-b8vlf" Oct 29 00:42:07.926914 kubelet[2791]: I1029 00:42:07.926784 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85b4v\" (UniqueName: \"kubernetes.io/projected/8171a21c-8e10-436f-a7d7-4945a41db439-kube-api-access-85b4v\") pod \"calico-kube-controllers-7f6cdb67dc-b8vlf\" (UID: \"8171a21c-8e10-436f-a7d7-4945a41db439\") " pod="calico-system/calico-kube-controllers-7f6cdb67dc-b8vlf" Oct 29 00:42:07.926914 kubelet[2791]: I1029 00:42:07.926802 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/732e008d-cf68-4a53-8651-e4c01c54044b-config-volume\") pod \"coredns-668d6bf9bc-jb77m\" (UID: \"732e008d-cf68-4a53-8651-e4c01c54044b\") " pod="kube-system/coredns-668d6bf9bc-jb77m" Oct 29 00:42:07.926914 kubelet[2791]: I1029 00:42:07.926832 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvzfg\" (UniqueName: \"kubernetes.io/projected/732e008d-cf68-4a53-8651-e4c01c54044b-kube-api-access-vvzfg\") pod \"coredns-668d6bf9bc-jb77m\" (UID: \"732e008d-cf68-4a53-8651-e4c01c54044b\") " pod="kube-system/coredns-668d6bf9bc-jb77m" Oct 29 00:42:07.926914 kubelet[2791]: I1029 00:42:07.926861 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7-config\") pod \"goldmane-666569f655-859b7\" (UID: \"8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7\") " pod="calico-system/goldmane-666569f655-859b7" Oct 29 00:42:07.926914 kubelet[2791]: I1029 00:42:07.926879 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nktq\" (UniqueName: 
\"kubernetes.io/projected/f8b0c773-fb85-46da-8c2b-d019ba347f69-kube-api-access-6nktq\") pod \"calico-apiserver-bfff496d5-kp5mz\" (UID: \"f8b0c773-fb85-46da-8c2b-d019ba347f69\") " pod="calico-apiserver/calico-apiserver-bfff496d5-kp5mz" Oct 29 00:42:07.927066 kubelet[2791]: I1029 00:42:07.926896 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp6kh\" (UniqueName: \"kubernetes.io/projected/6c784c32-7754-4ac4-867b-3b42e31408ba-kube-api-access-tp6kh\") pod \"calico-apiserver-bfff496d5-c6zrj\" (UID: \"6c784c32-7754-4ac4-867b-3b42e31408ba\") " pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" Oct 29 00:42:07.927066 kubelet[2791]: I1029 00:42:07.926912 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7-goldmane-key-pair\") pod \"goldmane-666569f655-859b7\" (UID: \"8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7\") " pod="calico-system/goldmane-666569f655-859b7" Oct 29 00:42:07.927066 kubelet[2791]: I1029 00:42:07.926925 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc26h\" (UniqueName: \"kubernetes.io/projected/8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7-kube-api-access-wc26h\") pod \"goldmane-666569f655-859b7\" (UID: \"8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7\") " pod="calico-system/goldmane-666569f655-859b7" Oct 29 00:42:07.927066 kubelet[2791]: I1029 00:42:07.926941 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6nr7\" (UniqueName: \"kubernetes.io/projected/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f-kube-api-access-t6nr7\") pod \"whisker-5b7697596f-dhwsn\" (UID: \"4404b88e-26a3-4dc9-aeb9-4f42e9ede99f\") " pod="calico-system/whisker-5b7697596f-dhwsn" Oct 29 00:42:07.927066 kubelet[2791]: I1029 00:42:07.926956 2791 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f-whisker-backend-key-pair\") pod \"whisker-5b7697596f-dhwsn\" (UID: \"4404b88e-26a3-4dc9-aeb9-4f42e9ede99f\") " pod="calico-system/whisker-5b7697596f-dhwsn" Oct 29 00:42:07.927223 kubelet[2791]: I1029 00:42:07.926971 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7-goldmane-ca-bundle\") pod \"goldmane-666569f655-859b7\" (UID: \"8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7\") " pod="calico-system/goldmane-666569f655-859b7" Oct 29 00:42:07.927223 kubelet[2791]: I1029 00:42:07.927048 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17e9d9b-8d8f-42d8-ac35-8c432eed92e3-config-volume\") pod \"coredns-668d6bf9bc-lp2g2\" (UID: \"e17e9d9b-8d8f-42d8-ac35-8c432eed92e3\") " pod="kube-system/coredns-668d6bf9bc-lp2g2" Oct 29 00:42:07.933021 systemd[1]: Created slice kubepods-besteffort-pod8171a21c_8e10_436f_a7d7_4945a41db439.slice - libcontainer container kubepods-besteffort-pod8171a21c_8e10_436f_a7d7_4945a41db439.slice. Oct 29 00:42:07.940437 systemd[1]: Created slice kubepods-besteffort-pod4404b88e_26a3_4dc9_aeb9_4f42e9ede99f.slice - libcontainer container kubepods-besteffort-pod4404b88e_26a3_4dc9_aeb9_4f42e9ede99f.slice. Oct 29 00:42:07.949283 systemd[1]: Created slice kubepods-besteffort-pod8b1da9ce_4db1_4e33_ac07_f0fdf633e7c7.slice - libcontainer container kubepods-besteffort-pod8b1da9ce_4db1_4e33_ac07_f0fdf633e7c7.slice. Oct 29 00:42:07.956762 systemd[1]: Created slice kubepods-burstable-pode17e9d9b_8d8f_42d8_ac35_8c432eed92e3.slice - libcontainer container kubepods-burstable-pode17e9d9b_8d8f_42d8_ac35_8c432eed92e3.slice. 
Oct 29 00:42:07.965539 systemd[1]: Created slice kubepods-burstable-pod732e008d_cf68_4a53_8651_e4c01c54044b.slice - libcontainer container kubepods-burstable-pod732e008d_cf68_4a53_8651_e4c01c54044b.slice. Oct 29 00:42:08.216482 containerd[1636]: time="2025-10-29T00:42:08.216340702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfff496d5-kp5mz,Uid:f8b0c773-fb85-46da-8c2b-d019ba347f69,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:42:08.231456 containerd[1636]: time="2025-10-29T00:42:08.231367524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfff496d5-c6zrj,Uid:6c784c32-7754-4ac4-867b-3b42e31408ba,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:42:08.237254 containerd[1636]: time="2025-10-29T00:42:08.237201469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6cdb67dc-b8vlf,Uid:8171a21c-8e10-436f-a7d7-4945a41db439,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:08.246133 containerd[1636]: time="2025-10-29T00:42:08.245875177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b7697596f-dhwsn,Uid:4404b88e-26a3-4dc9-aeb9-4f42e9ede99f,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:08.253946 containerd[1636]: time="2025-10-29T00:42:08.253908402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-859b7,Uid:8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:08.266917 kubelet[2791]: E1029 00:42:08.265619 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:08.269896 kubelet[2791]: E1029 00:42:08.268976 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:08.307330 containerd[1636]: 
time="2025-10-29T00:42:08.307277438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jb77m,Uid:732e008d-cf68-4a53-8651-e4c01c54044b,Namespace:kube-system,Attempt:0,}" Oct 29 00:42:08.324737 containerd[1636]: time="2025-10-29T00:42:08.324240812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lp2g2,Uid:e17e9d9b-8d8f-42d8-ac35-8c432eed92e3,Namespace:kube-system,Attempt:0,}" Oct 29 00:42:08.374715 containerd[1636]: time="2025-10-29T00:42:08.374648809Z" level=error msg="Failed to destroy network for sandbox \"0fdd4f4f7c827139776922050d3ff5e0ec8bb396c94e533aebad1ebdc9118667\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.377051 containerd[1636]: time="2025-10-29T00:42:08.376907899Z" level=error msg="Failed to destroy network for sandbox \"c8f7a16990d5020408e9462c06b0485a37f0984aede7cc5d2f93ec817ecd07d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.378766 containerd[1636]: time="2025-10-29T00:42:08.378721982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfff496d5-kp5mz,Uid:f8b0c773-fb85-46da-8c2b-d019ba347f69,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fdd4f4f7c827139776922050d3ff5e0ec8bb396c94e533aebad1ebdc9118667\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.379627 kubelet[2791]: E1029 00:42:08.379547 2791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"0fdd4f4f7c827139776922050d3ff5e0ec8bb396c94e533aebad1ebdc9118667\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.379712 kubelet[2791]: E1029 00:42:08.379646 2791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fdd4f4f7c827139776922050d3ff5e0ec8bb396c94e533aebad1ebdc9118667\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bfff496d5-kp5mz" Oct 29 00:42:08.379712 kubelet[2791]: E1029 00:42:08.379667 2791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fdd4f4f7c827139776922050d3ff5e0ec8bb396c94e533aebad1ebdc9118667\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bfff496d5-kp5mz" Oct 29 00:42:08.379765 kubelet[2791]: E1029 00:42:08.379711 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bfff496d5-kp5mz_calico-apiserver(f8b0c773-fb85-46da-8c2b-d019ba347f69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bfff496d5-kp5mz_calico-apiserver(f8b0c773-fb85-46da-8c2b-d019ba347f69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fdd4f4f7c827139776922050d3ff5e0ec8bb396c94e533aebad1ebdc9118667\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-bfff496d5-kp5mz" podUID="f8b0c773-fb85-46da-8c2b-d019ba347f69" Oct 29 00:42:08.403101 containerd[1636]: time="2025-10-29T00:42:08.402468197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6cdb67dc-b8vlf,Uid:8171a21c-8e10-436f-a7d7-4945a41db439,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8f7a16990d5020408e9462c06b0485a37f0984aede7cc5d2f93ec817ecd07d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.403426 kubelet[2791]: E1029 00:42:08.403332 2791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8f7a16990d5020408e9462c06b0485a37f0984aede7cc5d2f93ec817ecd07d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.403558 kubelet[2791]: E1029 00:42:08.403533 2791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8f7a16990d5020408e9462c06b0485a37f0984aede7cc5d2f93ec817ecd07d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f6cdb67dc-b8vlf" Oct 29 00:42:08.403602 kubelet[2791]: E1029 00:42:08.403565 2791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8f7a16990d5020408e9462c06b0485a37f0984aede7cc5d2f93ec817ecd07d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f6cdb67dc-b8vlf" Oct 29 00:42:08.403667 kubelet[2791]: E1029 00:42:08.403632 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f6cdb67dc-b8vlf_calico-system(8171a21c-8e10-436f-a7d7-4945a41db439)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f6cdb67dc-b8vlf_calico-system(8171a21c-8e10-436f-a7d7-4945a41db439)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8f7a16990d5020408e9462c06b0485a37f0984aede7cc5d2f93ec817ecd07d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f6cdb67dc-b8vlf" podUID="8171a21c-8e10-436f-a7d7-4945a41db439" Oct 29 00:42:08.413722 containerd[1636]: time="2025-10-29T00:42:08.413399213Z" level=error msg="Failed to destroy network for sandbox \"6c68db0e3f0e0b5ad712c8649ee24b6930c8b69a1f6b7a1ca7f347032fea9e48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.416750 containerd[1636]: time="2025-10-29T00:42:08.416707116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b7697596f-dhwsn,Uid:4404b88e-26a3-4dc9-aeb9-4f42e9ede99f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c68db0e3f0e0b5ad712c8649ee24b6930c8b69a1f6b7a1ca7f347032fea9e48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 
00:42:08.416938 kubelet[2791]: E1029 00:42:08.416897 2791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c68db0e3f0e0b5ad712c8649ee24b6930c8b69a1f6b7a1ca7f347032fea9e48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.416990 kubelet[2791]: E1029 00:42:08.416955 2791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c68db0e3f0e0b5ad712c8649ee24b6930c8b69a1f6b7a1ca7f347032fea9e48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b7697596f-dhwsn" Oct 29 00:42:08.416990 kubelet[2791]: E1029 00:42:08.416974 2791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c68db0e3f0e0b5ad712c8649ee24b6930c8b69a1f6b7a1ca7f347032fea9e48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b7697596f-dhwsn" Oct 29 00:42:08.417051 kubelet[2791]: E1029 00:42:08.417012 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b7697596f-dhwsn_calico-system(4404b88e-26a3-4dc9-aeb9-4f42e9ede99f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b7697596f-dhwsn_calico-system(4404b88e-26a3-4dc9-aeb9-4f42e9ede99f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c68db0e3f0e0b5ad712c8649ee24b6930c8b69a1f6b7a1ca7f347032fea9e48\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b7697596f-dhwsn" podUID="4404b88e-26a3-4dc9-aeb9-4f42e9ede99f" Oct 29 00:42:08.437122 kubelet[2791]: E1029 00:42:08.437073 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:08.440785 containerd[1636]: time="2025-10-29T00:42:08.440743938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 29 00:42:08.450599 containerd[1636]: time="2025-10-29T00:42:08.450545308Z" level=error msg="Failed to destroy network for sandbox \"fa951c04cb097b1f12ab3459b052a09725c570b8f21b34daa444a9940a88efb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.454583 containerd[1636]: time="2025-10-29T00:42:08.454377609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfff496d5-c6zrj,Uid:6c784c32-7754-4ac4-867b-3b42e31408ba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa951c04cb097b1f12ab3459b052a09725c570b8f21b34daa444a9940a88efb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.456645 kubelet[2791]: E1029 00:42:08.456592 2791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa951c04cb097b1f12ab3459b052a09725c570b8f21b34daa444a9940a88efb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Oct 29 00:42:08.456715 kubelet[2791]: E1029 00:42:08.456663 2791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa951c04cb097b1f12ab3459b052a09725c570b8f21b34daa444a9940a88efb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" Oct 29 00:42:08.456715 kubelet[2791]: E1029 00:42:08.456684 2791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa951c04cb097b1f12ab3459b052a09725c570b8f21b34daa444a9940a88efb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" Oct 29 00:42:08.456771 kubelet[2791]: E1029 00:42:08.456724 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bfff496d5-c6zrj_calico-apiserver(6c784c32-7754-4ac4-867b-3b42e31408ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bfff496d5-c6zrj_calico-apiserver(6c784c32-7754-4ac4-867b-3b42e31408ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa951c04cb097b1f12ab3459b052a09725c570b8f21b34daa444a9940a88efb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" podUID="6c784c32-7754-4ac4-867b-3b42e31408ba" Oct 29 00:42:08.457990 containerd[1636]: time="2025-10-29T00:42:08.456913098Z" level=error msg="Failed to destroy 
network for sandbox \"b7b0fe6aed3cc1dfc6e29f9e7a6d71bc3cc68c02bd4bdaa503192bfe623c617a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.460032 containerd[1636]: time="2025-10-29T00:42:08.460001979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-859b7,Uid:8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7b0fe6aed3cc1dfc6e29f9e7a6d71bc3cc68c02bd4bdaa503192bfe623c617a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.460804 kubelet[2791]: E1029 00:42:08.460573 2791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7b0fe6aed3cc1dfc6e29f9e7a6d71bc3cc68c02bd4bdaa503192bfe623c617a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.460804 kubelet[2791]: E1029 00:42:08.460662 2791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7b0fe6aed3cc1dfc6e29f9e7a6d71bc3cc68c02bd4bdaa503192bfe623c617a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-859b7" Oct 29 00:42:08.460804 kubelet[2791]: E1029 00:42:08.460721 2791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b7b0fe6aed3cc1dfc6e29f9e7a6d71bc3cc68c02bd4bdaa503192bfe623c617a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-859b7" Oct 29 00:42:08.461071 kubelet[2791]: E1029 00:42:08.460764 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-859b7_calico-system(8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-859b7_calico-system(8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7b0fe6aed3cc1dfc6e29f9e7a6d71bc3cc68c02bd4bdaa503192bfe623c617a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-859b7" podUID="8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7" Oct 29 00:42:08.462276 containerd[1636]: time="2025-10-29T00:42:08.462222127Z" level=error msg="Failed to destroy network for sandbox \"a86ecb231d2ef9990a7af1ba1cdabf9d9ae82371aff07d0ad9c205a6f43099f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.464033 containerd[1636]: time="2025-10-29T00:42:08.463996304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jb77m,Uid:732e008d-cf68-4a53-8651-e4c01c54044b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a86ecb231d2ef9990a7af1ba1cdabf9d9ae82371aff07d0ad9c205a6f43099f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.464341 kubelet[2791]: E1029 00:42:08.464296 2791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a86ecb231d2ef9990a7af1ba1cdabf9d9ae82371aff07d0ad9c205a6f43099f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.464575 kubelet[2791]: E1029 00:42:08.464361 2791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a86ecb231d2ef9990a7af1ba1cdabf9d9ae82371aff07d0ad9c205a6f43099f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jb77m" Oct 29 00:42:08.464575 kubelet[2791]: E1029 00:42:08.464407 2791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a86ecb231d2ef9990a7af1ba1cdabf9d9ae82371aff07d0ad9c205a6f43099f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jb77m" Oct 29 00:42:08.464575 kubelet[2791]: E1029 00:42:08.464448 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jb77m_kube-system(732e008d-cf68-4a53-8651-e4c01c54044b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jb77m_kube-system(732e008d-cf68-4a53-8651-e4c01c54044b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a86ecb231d2ef9990a7af1ba1cdabf9d9ae82371aff07d0ad9c205a6f43099f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jb77m" podUID="732e008d-cf68-4a53-8651-e4c01c54044b" Oct 29 00:42:08.476341 containerd[1636]: time="2025-10-29T00:42:08.476196347Z" level=error msg="Failed to destroy network for sandbox \"d6f5d34ea9236ece44f67eb66052d1497b99465d102c70b0e24021da268bfda4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.478002 containerd[1636]: time="2025-10-29T00:42:08.477938995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lp2g2,Uid:e17e9d9b-8d8f-42d8-ac35-8c432eed92e3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f5d34ea9236ece44f67eb66052d1497b99465d102c70b0e24021da268bfda4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.478244 kubelet[2791]: E1029 00:42:08.478193 2791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f5d34ea9236ece44f67eb66052d1497b99465d102c70b0e24021da268bfda4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:08.478300 kubelet[2791]: E1029 00:42:08.478260 2791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d6f5d34ea9236ece44f67eb66052d1497b99465d102c70b0e24021da268bfda4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lp2g2" Oct 29 00:42:08.478300 kubelet[2791]: E1029 00:42:08.478286 2791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f5d34ea9236ece44f67eb66052d1497b99465d102c70b0e24021da268bfda4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lp2g2" Oct 29 00:42:08.478370 kubelet[2791]: E1029 00:42:08.478335 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lp2g2_kube-system(e17e9d9b-8d8f-42d8-ac35-8c432eed92e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lp2g2_kube-system(e17e9d9b-8d8f-42d8-ac35-8c432eed92e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6f5d34ea9236ece44f67eb66052d1497b99465d102c70b0e24021da268bfda4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lp2g2" podUID="e17e9d9b-8d8f-42d8-ac35-8c432eed92e3" Oct 29 00:42:08.611333 systemd[1]: run-netns-cni\x2d28fe36be\x2d7ee6\x2de8ad\x2da297\x2dd5da19f71211.mount: Deactivated successfully. Oct 29 00:42:09.335877 systemd[1]: Created slice kubepods-besteffort-pod384bfac2_527a_4555_ad7a_580a89495c1d.slice - libcontainer container kubepods-besteffort-pod384bfac2_527a_4555_ad7a_580a89495c1d.slice. 
Oct 29 00:42:09.339662 containerd[1636]: time="2025-10-29T00:42:09.339422017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgj2m,Uid:384bfac2-527a-4555-ad7a-580a89495c1d,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:09.405247 containerd[1636]: time="2025-10-29T00:42:09.405168295Z" level=error msg="Failed to destroy network for sandbox \"8f9cee8d174b779363c1aaeffeb58c9397592250df862d7a99c62b5e54d013a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:09.407775 containerd[1636]: time="2025-10-29T00:42:09.407726757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgj2m,Uid:384bfac2-527a-4555-ad7a-580a89495c1d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f9cee8d174b779363c1aaeffeb58c9397592250df862d7a99c62b5e54d013a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:09.408145 kubelet[2791]: E1029 00:42:09.408083 2791 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f9cee8d174b779363c1aaeffeb58c9397592250df862d7a99c62b5e54d013a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:42:09.408134 systemd[1]: run-netns-cni\x2dc06f17bf\x2d9bce\x2df6a4\x2dda06\x2d8b32d71b809a.mount: Deactivated successfully. 
Oct 29 00:42:09.408292 kubelet[2791]: E1029 00:42:09.408163 2791 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f9cee8d174b779363c1aaeffeb58c9397592250df862d7a99c62b5e54d013a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zgj2m" Oct 29 00:42:09.408292 kubelet[2791]: E1029 00:42:09.408191 2791 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f9cee8d174b779363c1aaeffeb58c9397592250df862d7a99c62b5e54d013a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zgj2m" Oct 29 00:42:09.408292 kubelet[2791]: E1029 00:42:09.408261 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zgj2m_calico-system(384bfac2-527a-4555-ad7a-580a89495c1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zgj2m_calico-system(384bfac2-527a-4555-ad7a-580a89495c1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f9cee8d174b779363c1aaeffeb58c9397592250df862d7a99c62b5e54d013a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d" Oct 29 00:42:13.682148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount229486317.mount: Deactivated successfully. 
Oct 29 00:42:14.707288 containerd[1636]: time="2025-10-29T00:42:14.707119519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:14.708320 containerd[1636]: time="2025-10-29T00:42:14.708272547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 29 00:42:14.709892 containerd[1636]: time="2025-10-29T00:42:14.709853147Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:14.712428 containerd[1636]: time="2025-10-29T00:42:14.712354197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:42:14.712761 containerd[1636]: time="2025-10-29T00:42:14.712713703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.271929449s" Oct 29 00:42:14.712761 containerd[1636]: time="2025-10-29T00:42:14.712744521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 29 00:42:14.725867 containerd[1636]: time="2025-10-29T00:42:14.725808734Z" level=info msg="CreateContainer within sandbox \"87c0941fe1cffd8bd6e8f0f30d728b199981f8cfb92b9b8355c2551aa7394df5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 29 00:42:14.740287 containerd[1636]: time="2025-10-29T00:42:14.740220520Z" level=info msg="Container 
ed20490d18f0039df3a90eab75e00bee32479b32e400f556580bd98ae3bf0f0c: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:42:14.750002 containerd[1636]: time="2025-10-29T00:42:14.749949636Z" level=info msg="CreateContainer within sandbox \"87c0941fe1cffd8bd6e8f0f30d728b199981f8cfb92b9b8355c2551aa7394df5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ed20490d18f0039df3a90eab75e00bee32479b32e400f556580bd98ae3bf0f0c\"" Oct 29 00:42:14.750613 containerd[1636]: time="2025-10-29T00:42:14.750580041Z" level=info msg="StartContainer for \"ed20490d18f0039df3a90eab75e00bee32479b32e400f556580bd98ae3bf0f0c\"" Oct 29 00:42:14.752851 containerd[1636]: time="2025-10-29T00:42:14.752800454Z" level=info msg="connecting to shim ed20490d18f0039df3a90eab75e00bee32479b32e400f556580bd98ae3bf0f0c" address="unix:///run/containerd/s/b151eebe067379c533c1fe19cf8b02f294f85e5262940896efab8b823e5b2126" protocol=ttrpc version=3 Oct 29 00:42:14.782562 systemd[1]: Started cri-containerd-ed20490d18f0039df3a90eab75e00bee32479b32e400f556580bd98ae3bf0f0c.scope - libcontainer container ed20490d18f0039df3a90eab75e00bee32479b32e400f556580bd98ae3bf0f0c. Oct 29 00:42:14.839027 containerd[1636]: time="2025-10-29T00:42:14.838960817Z" level=info msg="StartContainer for \"ed20490d18f0039df3a90eab75e00bee32479b32e400f556580bd98ae3bf0f0c\" returns successfully" Oct 29 00:42:14.934288 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 29 00:42:14.935485 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 29 00:42:15.078580 kubelet[2791]: I1029 00:42:15.078499 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6nr7\" (UniqueName: \"kubernetes.io/projected/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f-kube-api-access-t6nr7\") pod \"4404b88e-26a3-4dc9-aeb9-4f42e9ede99f\" (UID: \"4404b88e-26a3-4dc9-aeb9-4f42e9ede99f\") " Oct 29 00:42:15.079129 kubelet[2791]: I1029 00:42:15.078707 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f-whisker-backend-key-pair\") pod \"4404b88e-26a3-4dc9-aeb9-4f42e9ede99f\" (UID: \"4404b88e-26a3-4dc9-aeb9-4f42e9ede99f\") " Oct 29 00:42:15.079129 kubelet[2791]: I1029 00:42:15.078762 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f-whisker-ca-bundle\") pod \"4404b88e-26a3-4dc9-aeb9-4f42e9ede99f\" (UID: \"4404b88e-26a3-4dc9-aeb9-4f42e9ede99f\") " Oct 29 00:42:15.079865 kubelet[2791]: I1029 00:42:15.079330 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4404b88e-26a3-4dc9-aeb9-4f42e9ede99f" (UID: "4404b88e-26a3-4dc9-aeb9-4f42e9ede99f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 00:42:15.085247 kubelet[2791]: I1029 00:42:15.085203 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4404b88e-26a3-4dc9-aeb9-4f42e9ede99f" (UID: "4404b88e-26a3-4dc9-aeb9-4f42e9ede99f"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 00:42:15.085533 kubelet[2791]: I1029 00:42:15.085446 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f-kube-api-access-t6nr7" (OuterVolumeSpecName: "kube-api-access-t6nr7") pod "4404b88e-26a3-4dc9-aeb9-4f42e9ede99f" (UID: "4404b88e-26a3-4dc9-aeb9-4f42e9ede99f"). InnerVolumeSpecName "kube-api-access-t6nr7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 00:42:15.179504 kubelet[2791]: I1029 00:42:15.179159 2791 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 29 00:42:15.180104 kubelet[2791]: I1029 00:42:15.180008 2791 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t6nr7\" (UniqueName: \"kubernetes.io/projected/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f-kube-api-access-t6nr7\") on node \"localhost\" DevicePath \"\"" Oct 29 00:42:15.180104 kubelet[2791]: I1029 00:42:15.180041 2791 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 29 00:42:15.336170 systemd[1]: Removed slice kubepods-besteffort-pod4404b88e_26a3_4dc9_aeb9_4f42e9ede99f.slice - libcontainer container kubepods-besteffort-pod4404b88e_26a3_4dc9_aeb9_4f42e9ede99f.slice. 
Oct 29 00:42:15.458855 kubelet[2791]: E1029 00:42:15.458747 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:15.486056 kubelet[2791]: I1029 00:42:15.485886 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9zp6b" podStartSLOduration=1.354261338 podStartE2EDuration="18.485861604s" podCreationTimestamp="2025-10-29 00:41:57 +0000 UTC" firstStartedPulling="2025-10-29 00:41:57.581893413 +0000 UTC m=+22.755076614" lastFinishedPulling="2025-10-29 00:42:14.713493669 +0000 UTC m=+39.886676880" observedRunningTime="2025-10-29 00:42:15.482843002 +0000 UTC m=+40.656026233" watchObservedRunningTime="2025-10-29 00:42:15.485861604 +0000 UTC m=+40.659044825" Oct 29 00:42:15.558346 systemd[1]: Created slice kubepods-besteffort-pod27ae937a_cd86_446b_82dd_96e1560db234.slice - libcontainer container kubepods-besteffort-pod27ae937a_cd86_446b_82dd_96e1560db234.slice. 
Oct 29 00:42:15.584362 kubelet[2791]: I1029 00:42:15.584293 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27ae937a-cd86-446b-82dd-96e1560db234-whisker-ca-bundle\") pod \"whisker-975dfcd7c-99nd7\" (UID: \"27ae937a-cd86-446b-82dd-96e1560db234\") " pod="calico-system/whisker-975dfcd7c-99nd7" Oct 29 00:42:15.584362 kubelet[2791]: I1029 00:42:15.584362 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/27ae937a-cd86-446b-82dd-96e1560db234-whisker-backend-key-pair\") pod \"whisker-975dfcd7c-99nd7\" (UID: \"27ae937a-cd86-446b-82dd-96e1560db234\") " pod="calico-system/whisker-975dfcd7c-99nd7" Oct 29 00:42:15.584597 kubelet[2791]: I1029 00:42:15.584460 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xblp\" (UniqueName: \"kubernetes.io/projected/27ae937a-cd86-446b-82dd-96e1560db234-kube-api-access-7xblp\") pod \"whisker-975dfcd7c-99nd7\" (UID: \"27ae937a-cd86-446b-82dd-96e1560db234\") " pod="calico-system/whisker-975dfcd7c-99nd7" Oct 29 00:42:15.725759 systemd[1]: var-lib-kubelet-pods-4404b88e\x2d26a3\x2d4dc9\x2daeb9\x2d4f42e9ede99f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt6nr7.mount: Deactivated successfully. Oct 29 00:42:15.725912 systemd[1]: var-lib-kubelet-pods-4404b88e\x2d26a3\x2d4dc9\x2daeb9\x2d4f42e9ede99f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 29 00:42:15.866349 containerd[1636]: time="2025-10-29T00:42:15.866284158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-975dfcd7c-99nd7,Uid:27ae937a-cd86-446b-82dd-96e1560db234,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:16.150286 systemd-networkd[1520]: calidd48cb5cd77: Link UP Oct 29 00:42:16.151068 systemd-networkd[1520]: calidd48cb5cd77: Gained carrier Oct 29 00:42:16.170549 containerd[1636]: 2025-10-29 00:42:15.999 [INFO][3962] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:42:16.170549 containerd[1636]: 2025-10-29 00:42:16.020 [INFO][3962] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--975dfcd7c--99nd7-eth0 whisker-975dfcd7c- calico-system 27ae937a-cd86-446b-82dd-96e1560db234 904 0 2025-10-29 00:42:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:975dfcd7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-975dfcd7c-99nd7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidd48cb5cd77 [] [] }} ContainerID="5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" Namespace="calico-system" Pod="whisker-975dfcd7c-99nd7" WorkloadEndpoint="localhost-k8s-whisker--975dfcd7c--99nd7-" Oct 29 00:42:16.170549 containerd[1636]: 2025-10-29 00:42:16.020 [INFO][3962] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" Namespace="calico-system" Pod="whisker-975dfcd7c-99nd7" WorkloadEndpoint="localhost-k8s-whisker--975dfcd7c--99nd7-eth0" Oct 29 00:42:16.170549 containerd[1636]: 2025-10-29 00:42:16.099 [INFO][3974] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" 
HandleID="k8s-pod-network.5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" Workload="localhost-k8s-whisker--975dfcd7c--99nd7-eth0" Oct 29 00:42:16.170843 containerd[1636]: 2025-10-29 00:42:16.100 [INFO][3974] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" HandleID="k8s-pod-network.5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" Workload="localhost-k8s-whisker--975dfcd7c--99nd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003dfb60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-975dfcd7c-99nd7", "timestamp":"2025-10-29 00:42:16.099691871 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:16.170843 containerd[1636]: 2025-10-29 00:42:16.100 [INFO][3974] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:16.170843 containerd[1636]: 2025-10-29 00:42:16.100 [INFO][3974] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:42:16.170843 containerd[1636]: 2025-10-29 00:42:16.100 [INFO][3974] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:16.170843 containerd[1636]: 2025-10-29 00:42:16.109 [INFO][3974] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" host="localhost" Oct 29 00:42:16.170843 containerd[1636]: 2025-10-29 00:42:16.116 [INFO][3974] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:16.170843 containerd[1636]: 2025-10-29 00:42:16.121 [INFO][3974] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:16.170843 containerd[1636]: 2025-10-29 00:42:16.123 [INFO][3974] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:16.170843 containerd[1636]: 2025-10-29 00:42:16.125 [INFO][3974] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:16.170843 containerd[1636]: 2025-10-29 00:42:16.125 [INFO][3974] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" host="localhost" Oct 29 00:42:16.171149 containerd[1636]: 2025-10-29 00:42:16.127 [INFO][3974] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d Oct 29 00:42:16.171149 containerd[1636]: 2025-10-29 00:42:16.131 [INFO][3974] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" host="localhost" Oct 29 00:42:16.171149 containerd[1636]: 2025-10-29 00:42:16.137 [INFO][3974] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" host="localhost" Oct 29 00:42:16.171149 containerd[1636]: 2025-10-29 00:42:16.137 [INFO][3974] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" host="localhost" Oct 29 00:42:16.171149 containerd[1636]: 2025-10-29 00:42:16.137 [INFO][3974] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:42:16.171149 containerd[1636]: 2025-10-29 00:42:16.137 [INFO][3974] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" HandleID="k8s-pod-network.5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" Workload="localhost-k8s-whisker--975dfcd7c--99nd7-eth0" Oct 29 00:42:16.171318 containerd[1636]: 2025-10-29 00:42:16.141 [INFO][3962] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" Namespace="calico-system" Pod="whisker-975dfcd7c-99nd7" WorkloadEndpoint="localhost-k8s-whisker--975dfcd7c--99nd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--975dfcd7c--99nd7-eth0", GenerateName:"whisker-975dfcd7c-", Namespace:"calico-system", SelfLink:"", UID:"27ae937a-cd86-446b-82dd-96e1560db234", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"975dfcd7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-975dfcd7c-99nd7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidd48cb5cd77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:16.171318 containerd[1636]: 2025-10-29 00:42:16.141 [INFO][3962] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" Namespace="calico-system" Pod="whisker-975dfcd7c-99nd7" WorkloadEndpoint="localhost-k8s-whisker--975dfcd7c--99nd7-eth0" Oct 29 00:42:16.171481 containerd[1636]: 2025-10-29 00:42:16.141 [INFO][3962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd48cb5cd77 ContainerID="5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" Namespace="calico-system" Pod="whisker-975dfcd7c-99nd7" WorkloadEndpoint="localhost-k8s-whisker--975dfcd7c--99nd7-eth0" Oct 29 00:42:16.171481 containerd[1636]: 2025-10-29 00:42:16.151 [INFO][3962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" Namespace="calico-system" Pod="whisker-975dfcd7c-99nd7" WorkloadEndpoint="localhost-k8s-whisker--975dfcd7c--99nd7-eth0" Oct 29 00:42:16.171564 containerd[1636]: 2025-10-29 00:42:16.155 [INFO][3962] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" Namespace="calico-system" Pod="whisker-975dfcd7c-99nd7" 
WorkloadEndpoint="localhost-k8s-whisker--975dfcd7c--99nd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--975dfcd7c--99nd7-eth0", GenerateName:"whisker-975dfcd7c-", Namespace:"calico-system", SelfLink:"", UID:"27ae937a-cd86-446b-82dd-96e1560db234", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"975dfcd7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d", Pod:"whisker-975dfcd7c-99nd7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidd48cb5cd77", MAC:"7a:8d:a7:84:fd:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:16.171641 containerd[1636]: 2025-10-29 00:42:16.166 [INFO][3962] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" Namespace="calico-system" Pod="whisker-975dfcd7c-99nd7" WorkloadEndpoint="localhost-k8s-whisker--975dfcd7c--99nd7-eth0" Oct 29 00:42:16.432765 containerd[1636]: time="2025-10-29T00:42:16.432582003Z" level=info msg="connecting to shim 
5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d" address="unix:///run/containerd/s/3cef66e5055c7e3a868a9381ad5930f632ddc70143b754f709028f26f90d8b39" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:16.483477 systemd[1]: Started cri-containerd-5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d.scope - libcontainer container 5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d. Oct 29 00:42:16.505067 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:16.696878 containerd[1636]: time="2025-10-29T00:42:16.696699416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-975dfcd7c-99nd7,Uid:27ae937a-cd86-446b-82dd-96e1560db234,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e97879322a0f77d181f56ae6d61b24201db2b1bbbe8a2646d387afe3277cd7d\"" Oct 29 00:42:16.698704 containerd[1636]: time="2025-10-29T00:42:16.698659508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:42:16.933366 systemd-networkd[1520]: vxlan.calico: Link UP Oct 29 00:42:16.933407 systemd-networkd[1520]: vxlan.calico: Gained carrier Oct 29 00:42:17.051378 containerd[1636]: time="2025-10-29T00:42:17.051288658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:17.052925 containerd[1636]: time="2025-10-29T00:42:17.052824533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:42:17.058469 containerd[1636]: time="2025-10-29T00:42:17.058313295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:42:17.058681 kubelet[2791]: E1029 00:42:17.058624 2791 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:42:17.059002 kubelet[2791]: E1029 00:42:17.058698 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:42:17.068249 kubelet[2791]: E1029 00:42:17.068166 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:456170bd4865481e98f102db6aff1c52,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xblp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,Win
dowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-975dfcd7c-99nd7_calico-system(27ae937a-cd86-446b-82dd-96e1560db234): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:17.072840 containerd[1636]: time="2025-10-29T00:42:17.072774474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 00:42:17.331334 kubelet[2791]: I1029 00:42:17.331157 2791 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4404b88e-26a3-4dc9-aeb9-4f42e9ede99f" path="/var/lib/kubelet/pods/4404b88e-26a3-4dc9-aeb9-4f42e9ede99f/volumes" Oct 29 00:42:17.415891 containerd[1636]: time="2025-10-29T00:42:17.415809331Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:17.590729 containerd[1636]: time="2025-10-29T00:42:17.590531756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:42:17.590729 containerd[1636]: time="2025-10-29T00:42:17.590587480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:42:17.590967 kubelet[2791]: E1029 00:42:17.590905 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:42:17.591032 kubelet[2791]: E1029 00:42:17.590963 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:42:17.591191 kubelet[2791]: E1029 00:42:17.591086 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xblp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProb
e:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-975dfcd7c-99nd7_calico-system(27ae937a-cd86-446b-82dd-96e1560db234): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:17.592718 kubelet[2791]: E1029 00:42:17.592658 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-975dfcd7c-99nd7" podUID="27ae937a-cd86-446b-82dd-96e1560db234" Oct 29 00:42:17.858594 systemd-networkd[1520]: calidd48cb5cd77: Gained IPv6LL Oct 29 
00:42:18.178878 systemd-networkd[1520]: vxlan.calico: Gained IPv6LL Oct 29 00:42:18.491651 kubelet[2791]: E1029 00:42:18.491493 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-975dfcd7c-99nd7" podUID="27ae937a-cd86-446b-82dd-96e1560db234" Oct 29 00:42:19.732664 systemd[1]: Started sshd@7-10.0.0.76:22-10.0.0.1:40988.service - OpenSSH per-connection server daemon (10.0.0.1:40988). Oct 29 00:42:19.840558 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 40988 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:19.842884 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:19.849287 systemd-logind[1612]: New session 8 of user core. Oct 29 00:42:19.854650 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 29 00:42:20.013789 sshd[4249]: Connection closed by 10.0.0.1 port 40988 Oct 29 00:42:20.014147 sshd-session[4246]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:20.019260 systemd[1]: sshd@7-10.0.0.76:22-10.0.0.1:40988.service: Deactivated successfully. 
Oct 29 00:42:20.021572 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 00:42:20.022347 systemd-logind[1612]: Session 8 logged out. Waiting for processes to exit. Oct 29 00:42:20.023611 systemd-logind[1612]: Removed session 8. Oct 29 00:42:20.328236 kubelet[2791]: E1029 00:42:20.328071 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:20.329219 containerd[1636]: time="2025-10-29T00:42:20.328448084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfff496d5-c6zrj,Uid:6c784c32-7754-4ac4-867b-3b42e31408ba,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:42:20.329219 containerd[1636]: time="2025-10-29T00:42:20.328822768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lp2g2,Uid:e17e9d9b-8d8f-42d8-ac35-8c432eed92e3,Namespace:kube-system,Attempt:0,}" Oct 29 00:42:20.455892 systemd-networkd[1520]: calie83c0dca502: Link UP Oct 29 00:42:20.457103 systemd-networkd[1520]: calie83c0dca502: Gained carrier Oct 29 00:42:20.482613 containerd[1636]: 2025-10-29 00:42:20.376 [INFO][4262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0 calico-apiserver-bfff496d5- calico-apiserver 6c784c32-7754-4ac4-867b-3b42e31408ba 833 0 2025-10-29 00:41:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bfff496d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bfff496d5-c6zrj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie83c0dca502 [] [] }} ContainerID="063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" 
Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-c6zrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--c6zrj-" Oct 29 00:42:20.482613 containerd[1636]: 2025-10-29 00:42:20.376 [INFO][4262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-c6zrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0" Oct 29 00:42:20.482613 containerd[1636]: 2025-10-29 00:42:20.410 [INFO][4290] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" HandleID="k8s-pod-network.063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" Workload="localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0" Oct 29 00:42:20.482904 containerd[1636]: 2025-10-29 00:42:20.410 [INFO][4290] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" HandleID="k8s-pod-network.063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" Workload="localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eda0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bfff496d5-c6zrj", "timestamp":"2025-10-29 00:42:20.410748785 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:20.482904 containerd[1636]: 2025-10-29 00:42:20.411 [INFO][4290] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:20.482904 containerd[1636]: 2025-10-29 00:42:20.411 [INFO][4290] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:42:20.482904 containerd[1636]: 2025-10-29 00:42:20.411 [INFO][4290] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:20.482904 containerd[1636]: 2025-10-29 00:42:20.419 [INFO][4290] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" host="localhost" Oct 29 00:42:20.482904 containerd[1636]: 2025-10-29 00:42:20.425 [INFO][4290] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:20.482904 containerd[1636]: 2025-10-29 00:42:20.431 [INFO][4290] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:20.482904 containerd[1636]: 2025-10-29 00:42:20.432 [INFO][4290] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:20.482904 containerd[1636]: 2025-10-29 00:42:20.435 [INFO][4290] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:20.482904 containerd[1636]: 2025-10-29 00:42:20.435 [INFO][4290] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" host="localhost" Oct 29 00:42:20.483223 containerd[1636]: 2025-10-29 00:42:20.437 [INFO][4290] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e Oct 29 00:42:20.483223 containerd[1636]: 2025-10-29 00:42:20.441 [INFO][4290] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" host="localhost" Oct 29 00:42:20.483223 containerd[1636]: 2025-10-29 00:42:20.448 [INFO][4290] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" host="localhost" Oct 29 00:42:20.483223 containerd[1636]: 2025-10-29 00:42:20.448 [INFO][4290] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" host="localhost" Oct 29 00:42:20.483223 containerd[1636]: 2025-10-29 00:42:20.448 [INFO][4290] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:42:20.483223 containerd[1636]: 2025-10-29 00:42:20.448 [INFO][4290] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" HandleID="k8s-pod-network.063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" Workload="localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0" Oct 29 00:42:20.483448 containerd[1636]: 2025-10-29 00:42:20.452 [INFO][4262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-c6zrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0", GenerateName:"calico-apiserver-bfff496d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c784c32-7754-4ac4-867b-3b42e31408ba", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bfff496d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bfff496d5-c6zrj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie83c0dca502", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:20.483530 containerd[1636]: 2025-10-29 00:42:20.452 [INFO][4262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-c6zrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0" Oct 29 00:42:20.483530 containerd[1636]: 2025-10-29 00:42:20.452 [INFO][4262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie83c0dca502 ContainerID="063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-c6zrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0" Oct 29 00:42:20.483530 containerd[1636]: 2025-10-29 00:42:20.456 [INFO][4262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-c6zrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0" Oct 29 00:42:20.483631 containerd[1636]: 2025-10-29 00:42:20.457 [INFO][4262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-c6zrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0", GenerateName:"calico-apiserver-bfff496d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c784c32-7754-4ac4-867b-3b42e31408ba", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bfff496d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e", Pod:"calico-apiserver-bfff496d5-c6zrj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie83c0dca502", MAC:"66:e6:49:63:3a:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:20.483709 containerd[1636]: 2025-10-29 00:42:20.477 [INFO][4262] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-c6zrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--c6zrj-eth0" Oct 29 00:42:20.582341 containerd[1636]: time="2025-10-29T00:42:20.582166119Z" level=info msg="connecting to shim 063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e" address="unix:///run/containerd/s/0e1488b1730f38ecb18873b02cc6663a493964f3dee0d918119c4d83c7efdb0a" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:20.606519 systemd-networkd[1520]: cali073bd5c5dcd: Link UP Oct 29 00:42:20.607494 systemd-networkd[1520]: cali073bd5c5dcd: Gained carrier Oct 29 00:42:20.615590 systemd[1]: Started cri-containerd-063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e.scope - libcontainer container 063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e. Oct 29 00:42:20.633492 containerd[1636]: 2025-10-29 00:42:20.384 [INFO][4271] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0 coredns-668d6bf9bc- kube-system e17e9d9b-8d8f-42d8-ac35-8c432eed92e3 837 0 2025-10-29 00:41:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-lp2g2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali073bd5c5dcd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" Namespace="kube-system" Pod="coredns-668d6bf9bc-lp2g2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lp2g2-" Oct 29 00:42:20.633492 containerd[1636]: 2025-10-29 00:42:20.384 [INFO][4271] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" Namespace="kube-system" Pod="coredns-668d6bf9bc-lp2g2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0" Oct 29 00:42:20.633492 containerd[1636]: 2025-10-29 00:42:20.429 [INFO][4296] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" HandleID="k8s-pod-network.4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" Workload="localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0" Oct 29 00:42:20.633731 containerd[1636]: 2025-10-29 00:42:20.429 [INFO][4296] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" HandleID="k8s-pod-network.4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" Workload="localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dec00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-lp2g2", "timestamp":"2025-10-29 00:42:20.429192144 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:20.633731 containerd[1636]: 2025-10-29 00:42:20.429 [INFO][4296] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:20.633731 containerd[1636]: 2025-10-29 00:42:20.449 [INFO][4296] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:42:20.633731 containerd[1636]: 2025-10-29 00:42:20.449 [INFO][4296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:20.633731 containerd[1636]: 2025-10-29 00:42:20.521 [INFO][4296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" host="localhost" Oct 29 00:42:20.633731 containerd[1636]: 2025-10-29 00:42:20.564 [INFO][4296] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:20.633731 containerd[1636]: 2025-10-29 00:42:20.570 [INFO][4296] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:20.633731 containerd[1636]: 2025-10-29 00:42:20.573 [INFO][4296] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:20.633731 containerd[1636]: 2025-10-29 00:42:20.578 [INFO][4296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:20.633731 containerd[1636]: 2025-10-29 00:42:20.578 [INFO][4296] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" host="localhost" Oct 29 00:42:20.633958 containerd[1636]: 2025-10-29 00:42:20.581 [INFO][4296] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b Oct 29 00:42:20.633958 containerd[1636]: 2025-10-29 00:42:20.588 [INFO][4296] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" host="localhost" Oct 29 00:42:20.633958 containerd[1636]: 2025-10-29 00:42:20.596 [INFO][4296] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" host="localhost" Oct 29 00:42:20.633958 containerd[1636]: 2025-10-29 00:42:20.596 [INFO][4296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" host="localhost" Oct 29 00:42:20.633958 containerd[1636]: 2025-10-29 00:42:20.596 [INFO][4296] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:42:20.633958 containerd[1636]: 2025-10-29 00:42:20.596 [INFO][4296] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" HandleID="k8s-pod-network.4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" Workload="localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0" Oct 29 00:42:20.634174 containerd[1636]: 2025-10-29 00:42:20.603 [INFO][4271] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" Namespace="kube-system" Pod="coredns-668d6bf9bc-lp2g2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e17e9d9b-8d8f-42d8-ac35-8c432eed92e3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-lp2g2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali073bd5c5dcd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:20.634237 containerd[1636]: 2025-10-29 00:42:20.603 [INFO][4271] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" Namespace="kube-system" Pod="coredns-668d6bf9bc-lp2g2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0" Oct 29 00:42:20.634237 containerd[1636]: 2025-10-29 00:42:20.603 [INFO][4271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali073bd5c5dcd ContainerID="4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" Namespace="kube-system" Pod="coredns-668d6bf9bc-lp2g2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0" Oct 29 00:42:20.634237 containerd[1636]: 2025-10-29 00:42:20.609 [INFO][4271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" Namespace="kube-system" Pod="coredns-668d6bf9bc-lp2g2" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0" Oct 29 00:42:20.634302 containerd[1636]: 2025-10-29 00:42:20.609 [INFO][4271] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" Namespace="kube-system" Pod="coredns-668d6bf9bc-lp2g2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e17e9d9b-8d8f-42d8-ac35-8c432eed92e3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b", Pod:"coredns-668d6bf9bc-lp2g2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali073bd5c5dcd", MAC:"0a:ea:64:96:40:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:20.634302 containerd[1636]: 2025-10-29 00:42:20.626 [INFO][4271] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" Namespace="kube-system" Pod="coredns-668d6bf9bc-lp2g2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lp2g2-eth0" Oct 29 00:42:20.637166 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:20.657405 containerd[1636]: time="2025-10-29T00:42:20.656601208Z" level=info msg="connecting to shim 4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b" address="unix:///run/containerd/s/b0b90bf6d1928e6bfeb57252d9dd01da51abbe95b9c5982fb103c2c905a3f044" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:20.675746 containerd[1636]: time="2025-10-29T00:42:20.675696461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfff496d5-c6zrj,Uid:6c784c32-7754-4ac4-867b-3b42e31408ba,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"063a8281892ca3c6a9fb3728cce8d0e8c5776bc1c38d9ed6e3c9ae8d8f3fad2e\"" Oct 29 00:42:20.680525 systemd[1]: Started cri-containerd-4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b.scope - libcontainer container 4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b. 
Oct 29 00:42:20.681530 containerd[1636]: time="2025-10-29T00:42:20.681495592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:20.696906 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:20.734006 containerd[1636]: time="2025-10-29T00:42:20.733948085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lp2g2,Uid:e17e9d9b-8d8f-42d8-ac35-8c432eed92e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b\"" Oct 29 00:42:20.734930 kubelet[2791]: E1029 00:42:20.734880 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:20.740769 containerd[1636]: time="2025-10-29T00:42:20.740721777Z" level=info msg="CreateContainer within sandbox \"4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:42:20.754449 containerd[1636]: time="2025-10-29T00:42:20.754368959Z" level=info msg="Container afb53b4a5ac441dc1449569e0e654f58ec30a3a1fb62b12bb7efe7379a45ac30: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:42:20.760742 containerd[1636]: time="2025-10-29T00:42:20.760692535Z" level=info msg="CreateContainer within sandbox \"4281d04601117e6493a9b2da918217bedfb6f6404e7b3926f222de63e38a360b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"afb53b4a5ac441dc1449569e0e654f58ec30a3a1fb62b12bb7efe7379a45ac30\"" Oct 29 00:42:20.761253 containerd[1636]: time="2025-10-29T00:42:20.761229854Z" level=info msg="StartContainer for \"afb53b4a5ac441dc1449569e0e654f58ec30a3a1fb62b12bb7efe7379a45ac30\"" Oct 29 00:42:20.762845 containerd[1636]: time="2025-10-29T00:42:20.762817305Z" level=info msg="connecting to shim 
afb53b4a5ac441dc1449569e0e654f58ec30a3a1fb62b12bb7efe7379a45ac30" address="unix:///run/containerd/s/b0b90bf6d1928e6bfeb57252d9dd01da51abbe95b9c5982fb103c2c905a3f044" protocol=ttrpc version=3 Oct 29 00:42:20.782875 systemd[1]: Started cri-containerd-afb53b4a5ac441dc1449569e0e654f58ec30a3a1fb62b12bb7efe7379a45ac30.scope - libcontainer container afb53b4a5ac441dc1449569e0e654f58ec30a3a1fb62b12bb7efe7379a45ac30. Oct 29 00:42:20.823276 containerd[1636]: time="2025-10-29T00:42:20.823210331Z" level=info msg="StartContainer for \"afb53b4a5ac441dc1449569e0e654f58ec30a3a1fb62b12bb7efe7379a45ac30\" returns successfully" Oct 29 00:42:21.003125 containerd[1636]: time="2025-10-29T00:42:21.002956337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:21.139976 containerd[1636]: time="2025-10-29T00:42:21.139914328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:21.139976 containerd[1636]: time="2025-10-29T00:42:21.139948762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:21.140356 kubelet[2791]: E1029 00:42:21.140286 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:21.140493 kubelet[2791]: E1029 00:42:21.140359 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:21.140663 kubelet[2791]: E1029 00:42:21.140583 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp6kh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bfff496d5-c6zrj_calico-apiserver(6c784c32-7754-4ac4-867b-3b42e31408ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:21.141848 kubelet[2791]: E1029 00:42:21.141808 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" podUID="6c784c32-7754-4ac4-867b-3b42e31408ba" Oct 29 00:42:21.328884 containerd[1636]: time="2025-10-29T00:42:21.328683598Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7f6cdb67dc-b8vlf,Uid:8171a21c-8e10-436f-a7d7-4945a41db439,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:21.328884 containerd[1636]: time="2025-10-29T00:42:21.328683678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfff496d5-kp5mz,Uid:f8b0c773-fb85-46da-8c2b-d019ba347f69,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:42:21.329094 containerd[1636]: time="2025-10-29T00:42:21.328681754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-859b7,Uid:8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:21.463658 systemd-networkd[1520]: cali880231fe28a: Link UP Oct 29 00:42:21.464283 systemd-networkd[1520]: cali880231fe28a: Gained carrier Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.386 [INFO][4463] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0 calico-apiserver-bfff496d5- calico-apiserver f8b0c773-fb85-46da-8c2b-d019ba347f69 826 0 2025-10-29 00:41:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bfff496d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bfff496d5-kp5mz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali880231fe28a [] [] }} ContainerID="24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-kp5mz" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--kp5mz-" Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.387 [INFO][4463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" 
Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-kp5mz" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0" Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.422 [INFO][4501] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" HandleID="k8s-pod-network.24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" Workload="localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0" Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.422 [INFO][4501] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" HandleID="k8s-pod-network.24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" Workload="localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bfff496d5-kp5mz", "timestamp":"2025-10-29 00:42:21.422052992 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.422 [INFO][4501] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.422 [INFO][4501] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.422 [INFO][4501] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.431 [INFO][4501] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" host="localhost" Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.436 [INFO][4501] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.440 [INFO][4501] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.442 [INFO][4501] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.444 [INFO][4501] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.444 [INFO][4501] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" host="localhost" Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.445 [INFO][4501] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.449 [INFO][4501] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" host="localhost" Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.456 [INFO][4501] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" host="localhost" Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.456 [INFO][4501] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" host="localhost" Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.456 [INFO][4501] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:42:21.479648 containerd[1636]: 2025-10-29 00:42:21.456 [INFO][4501] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" HandleID="k8s-pod-network.24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" Workload="localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0" Oct 29 00:42:21.480725 containerd[1636]: 2025-10-29 00:42:21.460 [INFO][4463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-kp5mz" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0", GenerateName:"calico-apiserver-bfff496d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8b0c773-fb85-46da-8c2b-d019ba347f69", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bfff496d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bfff496d5-kp5mz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali880231fe28a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:21.480725 containerd[1636]: 2025-10-29 00:42:21.460 [INFO][4463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-kp5mz" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0" Oct 29 00:42:21.480725 containerd[1636]: 2025-10-29 00:42:21.460 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali880231fe28a ContainerID="24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-kp5mz" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0" Oct 29 00:42:21.480725 containerd[1636]: 2025-10-29 00:42:21.464 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-kp5mz" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0" Oct 29 00:42:21.480725 containerd[1636]: 2025-10-29 00:42:21.464 [INFO][4463] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-kp5mz" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0", GenerateName:"calico-apiserver-bfff496d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8b0c773-fb85-46da-8c2b-d019ba347f69", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bfff496d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e", Pod:"calico-apiserver-bfff496d5-kp5mz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali880231fe28a", MAC:"56:f7:f5:cd:ca:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:21.480725 containerd[1636]: 2025-10-29 00:42:21.475 [INFO][4463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" Namespace="calico-apiserver" Pod="calico-apiserver-bfff496d5-kp5mz" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfff496d5--kp5mz-eth0" Oct 29 00:42:21.504162 containerd[1636]: time="2025-10-29T00:42:21.504101806Z" level=info msg="connecting to shim 24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e" address="unix:///run/containerd/s/375d5a805a6c2dc01a71eed335a5e4d03cbc73e12dcf883bfce5838fbf5258f6" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:21.531678 systemd[1]: Started cri-containerd-24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e.scope - libcontainer container 24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e. Oct 29 00:42:21.552628 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:21.572673 systemd-networkd[1520]: calif6e2b286db1: Link UP Oct 29 00:42:21.573234 systemd-networkd[1520]: calif6e2b286db1: Gained carrier Oct 29 00:42:21.593217 kubelet[2791]: E1029 00:42:21.593052 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" podUID="6c784c32-7754-4ac4-867b-3b42e31408ba" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.386 [INFO][4482] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--859b7-eth0 goldmane-666569f655- calico-system 8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7 
828 0 2025-10-29 00:41:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-859b7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif6e2b286db1 [] [] }} ContainerID="3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" Namespace="calico-system" Pod="goldmane-666569f655-859b7" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--859b7-" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.386 [INFO][4482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" Namespace="calico-system" Pod="goldmane-666569f655-859b7" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--859b7-eth0" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.423 [INFO][4499] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" HandleID="k8s-pod-network.3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" Workload="localhost-k8s-goldmane--666569f655--859b7-eth0" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.423 [INFO][4499] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" HandleID="k8s-pod-network.3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" Workload="localhost-k8s-goldmane--666569f655--859b7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-859b7", "timestamp":"2025-10-29 00:42:21.423430889 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.423 [INFO][4499] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.456 [INFO][4499] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.456 [INFO][4499] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.530 [INFO][4499] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" host="localhost" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.538 [INFO][4499] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.542 [INFO][4499] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.544 [INFO][4499] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.546 [INFO][4499] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.546 [INFO][4499] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" host="localhost" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.548 [INFO][4499] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805 Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.552 [INFO][4499] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" host="localhost" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.559 [INFO][4499] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" host="localhost" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.559 [INFO][4499] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" host="localhost" Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.559 [INFO][4499] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:42:21.608209 containerd[1636]: 2025-10-29 00:42:21.559 [INFO][4499] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" HandleID="k8s-pod-network.3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" Workload="localhost-k8s-goldmane--666569f655--859b7-eth0" Oct 29 00:42:21.608827 containerd[1636]: 2025-10-29 00:42:21.568 [INFO][4482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" Namespace="calico-system" Pod="goldmane-666569f655-859b7" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--859b7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--859b7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 55, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-859b7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif6e2b286db1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:21.608827 containerd[1636]: 2025-10-29 00:42:21.568 [INFO][4482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" Namespace="calico-system" Pod="goldmane-666569f655-859b7" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--859b7-eth0" Oct 29 00:42:21.608827 containerd[1636]: 2025-10-29 00:42:21.568 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6e2b286db1 ContainerID="3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" Namespace="calico-system" Pod="goldmane-666569f655-859b7" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--859b7-eth0" Oct 29 00:42:21.608827 containerd[1636]: 2025-10-29 00:42:21.573 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" Namespace="calico-system" Pod="goldmane-666569f655-859b7" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--859b7-eth0" Oct 29 00:42:21.608827 containerd[1636]: 2025-10-29 00:42:21.576 [INFO][4482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" Namespace="calico-system" Pod="goldmane-666569f655-859b7" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--859b7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--859b7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805", Pod:"goldmane-666569f655-859b7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif6e2b286db1", MAC:"ca:87:f4:bc:d3:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:21.608827 containerd[1636]: 2025-10-29 00:42:21.592 [INFO][4482] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" Namespace="calico-system" Pod="goldmane-666569f655-859b7" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--859b7-eth0" Oct 29 00:42:21.623245 kubelet[2791]: E1029 00:42:21.623199 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:21.626067 containerd[1636]: time="2025-10-29T00:42:21.626001237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfff496d5-kp5mz,Uid:f8b0c773-fb85-46da-8c2b-d019ba347f69,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"24ef605c138146d8d830fd85eacd3962b737e4035f49f7e84151724ee26dda3e\"" Oct 29 00:42:21.631642 containerd[1636]: time="2025-10-29T00:42:21.631369308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:21.652761 kubelet[2791]: I1029 00:42:21.652609 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lp2g2" podStartSLOduration=41.652565494 podStartE2EDuration="41.652565494s" podCreationTimestamp="2025-10-29 00:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:42:21.6524464 +0000 UTC m=+46.825629611" watchObservedRunningTime="2025-10-29 00:42:21.652565494 +0000 UTC m=+46.825748705" Oct 29 00:42:21.653801 containerd[1636]: time="2025-10-29T00:42:21.653743877Z" level=info msg="connecting to shim 3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805" address="unix:///run/containerd/s/fd47bb3a2d6be775277708bc8df887c3967987233d7a3faa4ba99105cf711d07" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:21.689409 systemd-networkd[1520]: calif4926a846d8: Link UP Oct 29 00:42:21.690732 systemd-networkd[1520]: 
calif4926a846d8: Gained carrier Oct 29 00:42:21.691609 systemd[1]: Started cri-containerd-3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805.scope - libcontainer container 3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805. Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.397 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0 calico-kube-controllers-7f6cdb67dc- calico-system 8171a21c-8e10-436f-a7d7-4945a41db439 835 0 2025-10-29 00:41:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f6cdb67dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f6cdb67dc-b8vlf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif4926a846d8 [] [] }} ContainerID="750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" Namespace="calico-system" Pod="calico-kube-controllers-7f6cdb67dc-b8vlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-" Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.398 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" Namespace="calico-system" Pod="calico-kube-controllers-7f6cdb67dc-b8vlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0" Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.430 [INFO][4513] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" HandleID="k8s-pod-network.750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" 
Workload="localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0" Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.431 [INFO][4513] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" HandleID="k8s-pod-network.750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" Workload="localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003241f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f6cdb67dc-b8vlf", "timestamp":"2025-10-29 00:42:21.4308386 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.431 [INFO][4513] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.559 [INFO][4513] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.559 [INFO][4513] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.632 [INFO][4513] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" host="localhost" Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.639 [INFO][4513] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.652 [INFO][4513] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.656 [INFO][4513] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.661 [INFO][4513] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.661 [INFO][4513] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" host="localhost" Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.664 [INFO][4513] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.668 [INFO][4513] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" host="localhost" Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.679 [INFO][4513] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" host="localhost" Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.679 [INFO][4513] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" host="localhost" Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.679 [INFO][4513] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:42:21.716280 containerd[1636]: 2025-10-29 00:42:21.679 [INFO][4513] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" HandleID="k8s-pod-network.750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" Workload="localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0" Oct 29 00:42:21.717132 containerd[1636]: 2025-10-29 00:42:21.684 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" Namespace="calico-system" Pod="calico-kube-controllers-7f6cdb67dc-b8vlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0", GenerateName:"calico-kube-controllers-7f6cdb67dc-", Namespace:"calico-system", SelfLink:"", UID:"8171a21c-8e10-436f-a7d7-4945a41db439", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f6cdb67dc", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f6cdb67dc-b8vlf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4926a846d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:21.717132 containerd[1636]: 2025-10-29 00:42:21.685 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" Namespace="calico-system" Pod="calico-kube-controllers-7f6cdb67dc-b8vlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0" Oct 29 00:42:21.717132 containerd[1636]: 2025-10-29 00:42:21.685 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4926a846d8 ContainerID="750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" Namespace="calico-system" Pod="calico-kube-controllers-7f6cdb67dc-b8vlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0" Oct 29 00:42:21.717132 containerd[1636]: 2025-10-29 00:42:21.691 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" Namespace="calico-system" Pod="calico-kube-controllers-7f6cdb67dc-b8vlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0" Oct 29 00:42:21.717132 containerd[1636]: 
2025-10-29 00:42:21.693 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" Namespace="calico-system" Pod="calico-kube-controllers-7f6cdb67dc-b8vlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0", GenerateName:"calico-kube-controllers-7f6cdb67dc-", Namespace:"calico-system", SelfLink:"", UID:"8171a21c-8e10-436f-a7d7-4945a41db439", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f6cdb67dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da", Pod:"calico-kube-controllers-7f6cdb67dc-b8vlf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4926a846d8", MAC:"b2:bc:e0:b4:35:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:21.717132 containerd[1636]: 
2025-10-29 00:42:21.709 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" Namespace="calico-system" Pod="calico-kube-controllers-7f6cdb67dc-b8vlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6cdb67dc--b8vlf-eth0" Oct 29 00:42:21.720484 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:21.743567 containerd[1636]: time="2025-10-29T00:42:21.743498592Z" level=info msg="connecting to shim 750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da" address="unix:///run/containerd/s/b321ac071816681527b5e04a59c43ccf0d3491cea35eb55254a8aeaa659fd42b" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:21.758065 containerd[1636]: time="2025-10-29T00:42:21.758018690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-859b7,Uid:8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b5c4a65dd206cfc6b8a8b417b78d5e188df614f3a7ab945ab8af050e8d14805\"" Oct 29 00:42:21.762786 systemd-networkd[1520]: calie83c0dca502: Gained IPv6LL Oct 29 00:42:21.774568 systemd[1]: Started cri-containerd-750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da.scope - libcontainer container 750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da. 
Oct 29 00:42:21.790336 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:21.821783 containerd[1636]: time="2025-10-29T00:42:21.821718866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6cdb67dc-b8vlf,Uid:8171a21c-8e10-436f-a7d7-4945a41db439,Namespace:calico-system,Attempt:0,} returns sandbox id \"750f1af5f7ad436398a618be77e19b6013c5183fa45fc83f963539763f5de8da\"" Oct 29 00:42:21.984140 containerd[1636]: time="2025-10-29T00:42:21.983988982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:21.985309 containerd[1636]: time="2025-10-29T00:42:21.985254509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:21.985422 containerd[1636]: time="2025-10-29T00:42:21.985360648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:21.985671 kubelet[2791]: E1029 00:42:21.985600 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:21.986427 kubelet[2791]: E1029 00:42:21.985790 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:21.986427 kubelet[2791]: E1029 00:42:21.986110 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nktq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bfff496d5-kp5mz_calico-apiserver(f8b0c773-fb85-46da-8c2b-d019ba347f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:21.986596 containerd[1636]: time="2025-10-29T00:42:21.986444805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:42:21.987324 kubelet[2791]: E1029 00:42:21.987253 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-kp5mz" podUID="f8b0c773-fb85-46da-8c2b-d019ba347f69" Oct 29 00:42:22.329243 containerd[1636]: 
time="2025-10-29T00:42:22.329156169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgj2m,Uid:384bfac2-527a-4555-ad7a-580a89495c1d,Namespace:calico-system,Attempt:0,}" Oct 29 00:42:22.358819 containerd[1636]: time="2025-10-29T00:42:22.358722829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:22.360394 containerd[1636]: time="2025-10-29T00:42:22.360030975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:42:22.360394 containerd[1636]: time="2025-10-29T00:42:22.360081440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:22.360545 kubelet[2791]: E1029 00:42:22.360354 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:42:22.360545 kubelet[2791]: E1029 00:42:22.360424 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:42:22.361095 kubelet[2791]: E1029 00:42:22.361009 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wc26h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-859b7_calico-system(8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:22.361235 containerd[1636]: time="2025-10-29T00:42:22.361172379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:42:22.362357 kubelet[2791]: E1029 00:42:22.362285 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-859b7" podUID="8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7" Oct 29 00:42:22.402615 systemd-networkd[1520]: cali073bd5c5dcd: Gained IPv6LL Oct 29 00:42:22.449157 
systemd-networkd[1520]: calie3804b4f85a: Link UP Oct 29 00:42:22.449605 systemd-networkd[1520]: calie3804b4f85a: Gained carrier Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.374 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zgj2m-eth0 csi-node-driver- calico-system 384bfac2-527a-4555-ad7a-580a89495c1d 710 0 2025-10-29 00:41:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zgj2m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie3804b4f85a [] [] }} ContainerID="5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" Namespace="calico-system" Pod="csi-node-driver-zgj2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgj2m-" Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.374 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" Namespace="calico-system" Pod="csi-node-driver-zgj2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgj2m-eth0" Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.410 [INFO][4718] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" HandleID="k8s-pod-network.5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" Workload="localhost-k8s-csi--node--driver--zgj2m-eth0" Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.411 [INFO][4718] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" 
HandleID="k8s-pod-network.5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" Workload="localhost-k8s-csi--node--driver--zgj2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zgj2m", "timestamp":"2025-10-29 00:42:22.410676056 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.411 [INFO][4718] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.411 [INFO][4718] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.411 [INFO][4718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.419 [INFO][4718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" host="localhost" Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.424 [INFO][4718] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.428 [INFO][4718] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.430 [INFO][4718] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.432 [INFO][4718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.432 
[INFO][4718] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" host="localhost" Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.433 [INFO][4718] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.437 [INFO][4718] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" host="localhost" Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.443 [INFO][4718] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" host="localhost" Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.443 [INFO][4718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" host="localhost" Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.443 [INFO][4718] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:42:22.464730 containerd[1636]: 2025-10-29 00:42:22.443 [INFO][4718] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" HandleID="k8s-pod-network.5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" Workload="localhost-k8s-csi--node--driver--zgj2m-eth0" Oct 29 00:42:22.465309 containerd[1636]: 2025-10-29 00:42:22.447 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" Namespace="calico-system" Pod="csi-node-driver-zgj2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgj2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zgj2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"384bfac2-527a-4555-ad7a-580a89495c1d", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zgj2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie3804b4f85a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:22.465309 containerd[1636]: 2025-10-29 00:42:22.447 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" Namespace="calico-system" Pod="csi-node-driver-zgj2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgj2m-eth0" Oct 29 00:42:22.465309 containerd[1636]: 2025-10-29 00:42:22.447 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3804b4f85a ContainerID="5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" Namespace="calico-system" Pod="csi-node-driver-zgj2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgj2m-eth0" Oct 29 00:42:22.465309 containerd[1636]: 2025-10-29 00:42:22.449 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" Namespace="calico-system" Pod="csi-node-driver-zgj2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgj2m-eth0" Oct 29 00:42:22.465309 containerd[1636]: 2025-10-29 00:42:22.450 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" Namespace="calico-system" Pod="csi-node-driver-zgj2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgj2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zgj2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"384bfac2-527a-4555-ad7a-580a89495c1d", ResourceVersion:"710", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d", Pod:"csi-node-driver-zgj2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie3804b4f85a", MAC:"16:63:13:b3:a3:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:22.465309 containerd[1636]: 2025-10-29 00:42:22.459 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" Namespace="calico-system" Pod="csi-node-driver-zgj2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--zgj2m-eth0" Oct 29 00:42:22.489431 containerd[1636]: time="2025-10-29T00:42:22.489359867Z" level=info msg="connecting to shim 5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d" address="unix:///run/containerd/s/ef708a0971f6ef385dfff0554ae461cce62819d083562a2107dd315071d66b01" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:22.521560 systemd[1]: Started 
cri-containerd-5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d.scope - libcontainer container 5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d. Oct 29 00:42:22.544067 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:22.565329 containerd[1636]: time="2025-10-29T00:42:22.565260634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgj2m,Uid:384bfac2-527a-4555-ad7a-580a89495c1d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b26f89bc8d19704fda24f146f5c79deb10e0ba764ad0f45e36c15e80d02327d\"" Oct 29 00:42:22.628400 kubelet[2791]: E1029 00:42:22.628193 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-kp5mz" podUID="f8b0c773-fb85-46da-8c2b-d019ba347f69" Oct 29 00:42:22.631614 kubelet[2791]: E1029 00:42:22.631588 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:22.632186 kubelet[2791]: E1029 00:42:22.632127 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" podUID="6c784c32-7754-4ac4-867b-3b42e31408ba" Oct 29 00:42:22.632696 kubelet[2791]: E1029 00:42:22.632641 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-859b7" podUID="8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7" Oct 29 00:42:22.713599 containerd[1636]: time="2025-10-29T00:42:22.713481592Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:22.714751 containerd[1636]: time="2025-10-29T00:42:22.714707363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:42:22.714915 containerd[1636]: time="2025-10-29T00:42:22.714751015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:42:22.715026 kubelet[2791]: E1029 00:42:22.714965 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:42:22.715084 kubelet[2791]: E1029 00:42:22.715039 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:42:22.715399 containerd[1636]: time="2025-10-29T00:42:22.715362945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:42:22.715437 kubelet[2791]: E1029 00:42:22.715346 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85b4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f6cdb67dc-b8vlf_calico-system(8171a21c-8e10-436f-a7d7-4945a41db439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:22.716647 kubelet[2791]: E1029 00:42:22.716586 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f6cdb67dc-b8vlf" podUID="8171a21c-8e10-436f-a7d7-4945a41db439" Oct 29 00:42:22.914612 systemd-networkd[1520]: calif6e2b286db1: Gained IPv6LL Oct 29 00:42:23.042608 systemd-networkd[1520]: calif4926a846d8: Gained IPv6LL Oct 29 00:42:23.106267 containerd[1636]: time="2025-10-29T00:42:23.106181125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:23.107559 containerd[1636]: time="2025-10-29T00:42:23.107501975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:42:23.107813 containerd[1636]: time="2025-10-29T00:42:23.107591443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:42:23.107859 kubelet[2791]: E1029 00:42:23.107783 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:42:23.107911 kubelet[2791]: E1029 00:42:23.107847 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:42:23.108072 kubelet[2791]: E1029 00:42:23.108013 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrvg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgj2m_calico-system(384bfac2-527a-4555-ad7a-580a89495c1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:23.110462 containerd[1636]: time="2025-10-29T00:42:23.110413952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:42:23.328626 kubelet[2791]: E1029 00:42:23.328571 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:23.329414 containerd[1636]: time="2025-10-29T00:42:23.329124602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jb77m,Uid:732e008d-cf68-4a53-8651-e4c01c54044b,Namespace:kube-system,Attempt:0,}" Oct 29 00:42:23.362649 systemd-networkd[1520]: cali880231fe28a: Gained IPv6LL Oct 29 00:42:23.485938 containerd[1636]: time="2025-10-29T00:42:23.485883018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:23.524843 containerd[1636]: time="2025-10-29T00:42:23.524744205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:42:23.524843 containerd[1636]: time="2025-10-29T00:42:23.524816801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:42:23.525539 kubelet[2791]: E1029 00:42:23.525071 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:42:23.525539 kubelet[2791]: E1029 00:42:23.525129 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:42:23.525539 kubelet[2791]: E1029 00:42:23.525292 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrvg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgj2m_calico-system(384bfac2-527a-4555-ad7a-580a89495c1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:23.527171 kubelet[2791]: E1029 00:42:23.527070 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d" Oct 29 00:42:23.536662 systemd-networkd[1520]: cali2405dc98bd4: Link UP Oct 29 00:42:23.538058 
systemd-networkd[1520]: cali2405dc98bd4: Gained carrier Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.435 [INFO][4782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--jb77m-eth0 coredns-668d6bf9bc- kube-system 732e008d-cf68-4a53-8651-e4c01c54044b 838 0 2025-10-29 00:41:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-jb77m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2405dc98bd4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-jb77m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jb77m-" Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.436 [INFO][4782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-jb77m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jb77m-eth0" Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.467 [INFO][4798] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" HandleID="k8s-pod-network.698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" Workload="localhost-k8s-coredns--668d6bf9bc--jb77m-eth0" Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.467 [INFO][4798] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" HandleID="k8s-pod-network.698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" 
Workload="localhost-k8s-coredns--668d6bf9bc--jb77m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135890), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-jb77m", "timestamp":"2025-10-29 00:42:23.467686011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.467 [INFO][4798] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.468 [INFO][4798] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.468 [INFO][4798] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.475 [INFO][4798] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" host="localhost" Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.479 [INFO][4798] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.484 [INFO][4798] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.485 [INFO][4798] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.488 [INFO][4798] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.488 [INFO][4798] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" host="localhost" Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.489 [INFO][4798] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1 Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.498 [INFO][4798] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" host="localhost" Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.528 [INFO][4798] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" host="localhost" Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.528 [INFO][4798] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" host="localhost" Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.528 [INFO][4798] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 00:42:23.558720 containerd[1636]: 2025-10-29 00:42:23.528 [INFO][4798] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" HandleID="k8s-pod-network.698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" Workload="localhost-k8s-coredns--668d6bf9bc--jb77m-eth0" Oct 29 00:42:23.559256 containerd[1636]: 2025-10-29 00:42:23.532 [INFO][4782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-jb77m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jb77m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jb77m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"732e008d-cf68-4a53-8651-e4c01c54044b", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-jb77m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2405dc98bd4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:23.559256 containerd[1636]: 2025-10-29 00:42:23.532 [INFO][4782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-jb77m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jb77m-eth0" Oct 29 00:42:23.559256 containerd[1636]: 2025-10-29 00:42:23.532 [INFO][4782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2405dc98bd4 ContainerID="698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-jb77m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jb77m-eth0" Oct 29 00:42:23.559256 containerd[1636]: 2025-10-29 00:42:23.538 [INFO][4782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-jb77m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jb77m-eth0" Oct 29 00:42:23.559256 containerd[1636]: 2025-10-29 00:42:23.538 [INFO][4782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-jb77m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jb77m-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jb77m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"732e008d-cf68-4a53-8651-e4c01c54044b", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 41, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1", Pod:"coredns-668d6bf9bc-jb77m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2405dc98bd4", MAC:"82:72:d9:c2:1e:4b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:42:23.559256 containerd[1636]: 2025-10-29 00:42:23.552 [INFO][4782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-jb77m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jb77m-eth0" Oct 29 00:42:23.585748 containerd[1636]: time="2025-10-29T00:42:23.585498820Z" level=info msg="connecting to shim 698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1" address="unix:///run/containerd/s/6e2733680cdcd7992379526de6cb9342482cd4c84993a59b7ccbde5275c6b3b6" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:42:23.621617 systemd[1]: Started cri-containerd-698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1.scope - libcontainer container 698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1. Oct 29 00:42:23.637695 kubelet[2791]: E1029 00:42:23.637646 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:23.640560 kubelet[2791]: E1029 00:42:23.640509 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-859b7" podUID="8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7" Oct 29 00:42:23.640890 kubelet[2791]: E1029 00:42:23.640853 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f6cdb67dc-b8vlf" podUID="8171a21c-8e10-436f-a7d7-4945a41db439" Oct 29 00:42:23.641549 kubelet[2791]: E1029 00:42:23.641423 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d" Oct 29 00:42:23.642275 kubelet[2791]: E1029 00:42:23.642224 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-kp5mz" 
podUID="f8b0c773-fb85-46da-8c2b-d019ba347f69" Oct 29 00:42:23.648333 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:23.693844 containerd[1636]: time="2025-10-29T00:42:23.693705083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jb77m,Uid:732e008d-cf68-4a53-8651-e4c01c54044b,Namespace:kube-system,Attempt:0,} returns sandbox id \"698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1\"" Oct 29 00:42:23.695588 kubelet[2791]: E1029 00:42:23.695552 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:23.698981 containerd[1636]: time="2025-10-29T00:42:23.698921067Z" level=info msg="CreateContainer within sandbox \"698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:42:23.716442 containerd[1636]: time="2025-10-29T00:42:23.715581740Z" level=info msg="Container d7086344fb299e864b8cbbd4ce478157536a65df3f103361da71b058a5ba061c: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:42:23.722609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806546150.mount: Deactivated successfully. 
Oct 29 00:42:23.722884 containerd[1636]: time="2025-10-29T00:42:23.722844736Z" level=info msg="CreateContainer within sandbox \"698bf7edd2b1552f76f25bcdf6aff1bf5255af2c9bbb74ec50d8ea33d81bbcd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7086344fb299e864b8cbbd4ce478157536a65df3f103361da71b058a5ba061c\"" Oct 29 00:42:23.724227 containerd[1636]: time="2025-10-29T00:42:23.724093411Z" level=info msg="StartContainer for \"d7086344fb299e864b8cbbd4ce478157536a65df3f103361da71b058a5ba061c\"" Oct 29 00:42:23.725219 containerd[1636]: time="2025-10-29T00:42:23.725190451Z" level=info msg="connecting to shim d7086344fb299e864b8cbbd4ce478157536a65df3f103361da71b058a5ba061c" address="unix:///run/containerd/s/6e2733680cdcd7992379526de6cb9342482cd4c84993a59b7ccbde5275c6b3b6" protocol=ttrpc version=3 Oct 29 00:42:23.760645 systemd[1]: Started cri-containerd-d7086344fb299e864b8cbbd4ce478157536a65df3f103361da71b058a5ba061c.scope - libcontainer container d7086344fb299e864b8cbbd4ce478157536a65df3f103361da71b058a5ba061c. 
Oct 29 00:42:23.811090 containerd[1636]: time="2025-10-29T00:42:23.811019167Z" level=info msg="StartContainer for \"d7086344fb299e864b8cbbd4ce478157536a65df3f103361da71b058a5ba061c\" returns successfully" Oct 29 00:42:23.938636 systemd-networkd[1520]: calie3804b4f85a: Gained IPv6LL Oct 29 00:42:24.644410 kubelet[2791]: E1029 00:42:24.644346 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:24.654886 kubelet[2791]: I1029 00:42:24.654822 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jb77m" podStartSLOduration=44.654800231 podStartE2EDuration="44.654800231s" podCreationTimestamp="2025-10-29 00:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:42:24.65409667 +0000 UTC m=+49.827279881" watchObservedRunningTime="2025-10-29 00:42:24.654800231 +0000 UTC m=+49.827983442" Oct 29 00:42:24.750595 kubelet[2791]: I1029 00:42:24.750245 2791 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:42:24.751224 kubelet[2791]: E1029 00:42:24.751161 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:24.893038 containerd[1636]: time="2025-10-29T00:42:24.892984762Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed20490d18f0039df3a90eab75e00bee32479b32e400f556580bd98ae3bf0f0c\" id:\"814ee6ebc31bf0282081fb9c7578eea710cc379d506cfd446d9b8dbdd79798b2\" pid:4909 exited_at:{seconds:1761698544 nanos:892606632}" Oct 29 00:42:25.029782 systemd[1]: Started sshd@8-10.0.0.76:22-10.0.0.1:33628.service - OpenSSH per-connection server daemon (10.0.0.1:33628). 
Oct 29 00:42:25.039719 containerd[1636]: time="2025-10-29T00:42:25.039656914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed20490d18f0039df3a90eab75e00bee32479b32e400f556580bd98ae3bf0f0c\" id:\"b42deed58ab427242382c8a687ac84459490ed619cebc48b0527b5fd1bd618f4\" pid:4934 exited_at:{seconds:1761698545 nanos:39292449}" Oct 29 00:42:25.091586 systemd-networkd[1520]: cali2405dc98bd4: Gained IPv6LL Oct 29 00:42:25.116134 sshd[4946]: Accepted publickey for core from 10.0.0.1 port 33628 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:25.118257 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:25.123803 systemd-logind[1612]: New session 9 of user core. Oct 29 00:42:25.129552 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 29 00:42:25.278819 sshd[4949]: Connection closed by 10.0.0.1 port 33628 Oct 29 00:42:25.279167 sshd-session[4946]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:25.283346 systemd[1]: sshd@8-10.0.0.76:22-10.0.0.1:33628.service: Deactivated successfully. Oct 29 00:42:25.285802 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 00:42:25.288340 systemd-logind[1612]: Session 9 logged out. Waiting for processes to exit. Oct 29 00:42:25.289490 systemd-logind[1612]: Removed session 9. 
Oct 29 00:42:25.643846 kubelet[2791]: E1029 00:42:25.643696 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:25.643846 kubelet[2791]: E1029 00:42:25.643762 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:26.645149 kubelet[2791]: E1029 00:42:26.645097 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:30.297510 systemd[1]: Started sshd@9-10.0.0.76:22-10.0.0.1:51922.service - OpenSSH per-connection server daemon (10.0.0.1:51922). Oct 29 00:42:30.361631 sshd[4976]: Accepted publickey for core from 10.0.0.1 port 51922 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:30.363508 sshd-session[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:30.368225 systemd-logind[1612]: New session 10 of user core. Oct 29 00:42:30.377509 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 29 00:42:30.503098 sshd[4979]: Connection closed by 10.0.0.1 port 51922 Oct 29 00:42:30.503523 sshd-session[4976]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:30.509705 systemd[1]: sshd@9-10.0.0.76:22-10.0.0.1:51922.service: Deactivated successfully. Oct 29 00:42:30.511907 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 00:42:30.513022 systemd-logind[1612]: Session 10 logged out. Waiting for processes to exit. Oct 29 00:42:30.514680 systemd-logind[1612]: Removed session 10. 
Oct 29 00:42:32.329334 containerd[1636]: time="2025-10-29T00:42:32.329182482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:42:32.681961 containerd[1636]: time="2025-10-29T00:42:32.681807269Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:32.723861 containerd[1636]: time="2025-10-29T00:42:32.723778895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:42:32.724015 containerd[1636]: time="2025-10-29T00:42:32.723833157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:42:32.724105 kubelet[2791]: E1029 00:42:32.724065 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:42:32.724481 kubelet[2791]: E1029 00:42:32.724119 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:42:32.724481 kubelet[2791]: E1029 00:42:32.724267 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:456170bd4865481e98f102db6aff1c52,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xblp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-975dfcd7c-99nd7_calico-system(27ae937a-cd86-446b-82dd-96e1560db234): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:32.726278 containerd[1636]: time="2025-10-29T00:42:32.726249020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 
00:42:33.141488 containerd[1636]: time="2025-10-29T00:42:33.141363578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:33.142740 containerd[1636]: time="2025-10-29T00:42:33.142692331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:42:33.142823 containerd[1636]: time="2025-10-29T00:42:33.142793842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:42:33.143049 kubelet[2791]: E1029 00:42:33.142978 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:42:33.143136 kubelet[2791]: E1029 00:42:33.143059 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:42:33.143259 kubelet[2791]: E1029 00:42:33.143198 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xblp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-975dfcd7c-99nd7_calico-system(27ae937a-cd86-446b-82dd-96e1560db234): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:33.144507 kubelet[2791]: E1029 00:42:33.144441 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-975dfcd7c-99nd7" podUID="27ae937a-cd86-446b-82dd-96e1560db234" Oct 29 00:42:33.332708 containerd[1636]: time="2025-10-29T00:42:33.332645649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:33.804655 containerd[1636]: time="2025-10-29T00:42:33.804581704Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:33.805766 containerd[1636]: time="2025-10-29T00:42:33.805726902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:33.805840 containerd[1636]: time="2025-10-29T00:42:33.805804718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:33.805964 
kubelet[2791]: E1029 00:42:33.805915 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:33.806359 kubelet[2791]: E1029 00:42:33.805964 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:33.806359 kubelet[2791]: E1029 00:42:33.806081 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp6kh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bfff496d5-c6zrj_calico-apiserver(6c784c32-7754-4ac4-867b-3b42e31408ba): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:33.807309 kubelet[2791]: E1029 00:42:33.807262 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" podUID="6c784c32-7754-4ac4-867b-3b42e31408ba" Oct 29 00:42:35.329244 containerd[1636]: time="2025-10-29T00:42:35.328993403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:42:35.519952 systemd[1]: Started sshd@10-10.0.0.76:22-10.0.0.1:51930.service - OpenSSH per-connection server daemon (10.0.0.1:51930). Oct 29 00:42:35.584218 sshd[4997]: Accepted publickey for core from 10.0.0.1 port 51930 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:35.585478 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:35.590877 systemd-logind[1612]: New session 11 of user core. Oct 29 00:42:35.603632 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 29 00:42:35.700470 containerd[1636]: time="2025-10-29T00:42:35.700410484Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:35.730303 sshd[5000]: Connection closed by 10.0.0.1 port 51930 Oct 29 00:42:35.730754 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:35.732653 containerd[1636]: time="2025-10-29T00:42:35.732548918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:42:35.732653 containerd[1636]: time="2025-10-29T00:42:35.732618359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:35.732882 kubelet[2791]: E1029 00:42:35.732819 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:42:35.732882 kubelet[2791]: E1029 00:42:35.732884 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:42:35.733479 kubelet[2791]: E1029 00:42:35.733055 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wc26h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-859b7_calico-system(8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:35.734468 kubelet[2791]: E1029 00:42:35.734363 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-859b7" podUID="8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7" Oct 29 00:42:35.745566 systemd[1]: sshd@10-10.0.0.76:22-10.0.0.1:51930.service: Deactivated successfully. Oct 29 00:42:35.748023 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 00:42:35.749198 systemd-logind[1612]: Session 11 logged out. 
Waiting for processes to exit. Oct 29 00:42:35.752555 systemd[1]: Started sshd@11-10.0.0.76:22-10.0.0.1:51932.service - OpenSSH per-connection server daemon (10.0.0.1:51932). Oct 29 00:42:35.753432 systemd-logind[1612]: Removed session 11. Oct 29 00:42:35.819042 sshd[5014]: Accepted publickey for core from 10.0.0.1 port 51932 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:35.820971 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:35.826145 systemd-logind[1612]: New session 12 of user core. Oct 29 00:42:35.837511 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 29 00:42:36.127480 sshd[5018]: Connection closed by 10.0.0.1 port 51932 Oct 29 00:42:36.128711 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:36.144716 systemd[1]: sshd@11-10.0.0.76:22-10.0.0.1:51932.service: Deactivated successfully. Oct 29 00:42:36.148199 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 00:42:36.150916 systemd-logind[1612]: Session 12 logged out. Waiting for processes to exit. Oct 29 00:42:36.153502 systemd[1]: Started sshd@12-10.0.0.76:22-10.0.0.1:51942.service - OpenSSH per-connection server daemon (10.0.0.1:51942). Oct 29 00:42:36.155141 systemd-logind[1612]: Removed session 12. Oct 29 00:42:36.215011 sshd[5029]: Accepted publickey for core from 10.0.0.1 port 51942 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:36.216918 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:36.222360 systemd-logind[1612]: New session 13 of user core. Oct 29 00:42:36.234553 systemd[1]: Started session-13.scope - Session 13 of User core. 
Oct 29 00:42:36.329528 containerd[1636]: time="2025-10-29T00:42:36.329469876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:42:36.381625 sshd[5032]: Connection closed by 10.0.0.1 port 51942 Oct 29 00:42:36.381883 sshd-session[5029]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:36.386743 systemd[1]: sshd@12-10.0.0.76:22-10.0.0.1:51942.service: Deactivated successfully. Oct 29 00:42:36.389037 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 00:42:36.390016 systemd-logind[1612]: Session 13 logged out. Waiting for processes to exit. Oct 29 00:42:36.391346 systemd-logind[1612]: Removed session 13. Oct 29 00:42:36.711307 containerd[1636]: time="2025-10-29T00:42:36.711141222Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:36.887969 containerd[1636]: time="2025-10-29T00:42:36.887873784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:42:36.887969 containerd[1636]: time="2025-10-29T00:42:36.887949596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:42:36.888298 kubelet[2791]: E1029 00:42:36.888197 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:42:36.888719 kubelet[2791]: E1029 00:42:36.888298 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:42:36.888800 kubelet[2791]: E1029 00:42:36.888713 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrvg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volume
Device{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgj2m_calico-system(384bfac2-527a-4555-ad7a-580a89495c1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:36.888978 containerd[1636]: time="2025-10-29T00:42:36.888951947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:42:37.395888 containerd[1636]: time="2025-10-29T00:42:37.395802849Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:37.513588 containerd[1636]: time="2025-10-29T00:42:37.513494712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:42:37.513766 containerd[1636]: time="2025-10-29T00:42:37.513572578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:42:37.513958 kubelet[2791]: E1029 00:42:37.513891 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:37.513958 kubelet[2791]: E1029 00:42:37.513956 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:42:37.514411 kubelet[2791]: E1029 00:42:37.514281 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nktq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bfff496d5-kp5mz_calico-apiserver(f8b0c773-fb85-46da-8c2b-d019ba347f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:37.514707 containerd[1636]: time="2025-10-29T00:42:37.514596129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:42:37.515974 kubelet[2791]: E1029 00:42:37.515928 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-kp5mz" podUID="f8b0c773-fb85-46da-8c2b-d019ba347f69" Oct 29 00:42:37.850898 containerd[1636]: 
time="2025-10-29T00:42:37.850834307Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:37.933508 containerd[1636]: time="2025-10-29T00:42:37.933413839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:42:37.933710 containerd[1636]: time="2025-10-29T00:42:37.933437403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:42:37.933822 kubelet[2791]: E1029 00:42:37.933750 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:42:37.933822 kubelet[2791]: E1029 00:42:37.933818 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:42:37.934300 kubelet[2791]: E1029 00:42:37.933940 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrvg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgj2m_calico-system(384bfac2-527a-4555-ad7a-580a89495c1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:37.935208 kubelet[2791]: E1029 00:42:37.935157 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d" Oct 29 00:42:38.329143 containerd[1636]: time="2025-10-29T00:42:38.329027517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:42:38.670849 containerd[1636]: time="2025-10-29T00:42:38.670681884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:42:38.672270 containerd[1636]: time="2025-10-29T00:42:38.672117488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:42:38.672270 containerd[1636]: time="2025-10-29T00:42:38.672167742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:42:38.672442 
kubelet[2791]: E1029 00:42:38.672372 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:42:38.672531 kubelet[2791]: E1029 00:42:38.672445 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:42:38.672634 kubelet[2791]: E1029 00:42:38.672580 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bu
ndle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85b4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f6cdb67dc-b8vlf_calico-system(8171a21c-8e10-436f-a7d7-4945a41db439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:42:38.673812 kubelet[2791]: E1029 00:42:38.673751 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f6cdb67dc-b8vlf" podUID="8171a21c-8e10-436f-a7d7-4945a41db439" Oct 29 00:42:41.397295 systemd[1]: Started sshd@13-10.0.0.76:22-10.0.0.1:34224.service - OpenSSH per-connection server daemon (10.0.0.1:34224). Oct 29 00:42:41.458807 sshd[5057]: Accepted publickey for core from 10.0.0.1 port 34224 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:41.460231 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:41.464642 systemd-logind[1612]: New session 14 of user core. Oct 29 00:42:41.475506 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 29 00:42:41.602485 sshd[5060]: Connection closed by 10.0.0.1 port 34224 Oct 29 00:42:41.602870 sshd-session[5057]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:41.606503 systemd[1]: sshd@13-10.0.0.76:22-10.0.0.1:34224.service: Deactivated successfully. Oct 29 00:42:41.609232 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 00:42:41.611189 systemd-logind[1612]: Session 14 logged out. Waiting for processes to exit. Oct 29 00:42:41.613067 systemd-logind[1612]: Removed session 14. Oct 29 00:42:46.619555 systemd[1]: Started sshd@14-10.0.0.76:22-10.0.0.1:34228.service - OpenSSH per-connection server daemon (10.0.0.1:34228). 
Oct 29 00:42:46.679885 sshd[5077]: Accepted publickey for core from 10.0.0.1 port 34228 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:46.681197 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:46.685730 systemd-logind[1612]: New session 15 of user core. Oct 29 00:42:46.696517 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 29 00:42:46.813222 sshd[5080]: Connection closed by 10.0.0.1 port 34228 Oct 29 00:42:46.813667 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:46.818800 systemd[1]: sshd@14-10.0.0.76:22-10.0.0.1:34228.service: Deactivated successfully. Oct 29 00:42:46.821318 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 00:42:46.822243 systemd-logind[1612]: Session 15 logged out. Waiting for processes to exit. Oct 29 00:42:46.823548 systemd-logind[1612]: Removed session 15. Oct 29 00:42:47.329001 kubelet[2791]: E1029 00:42:47.328905 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-859b7" podUID="8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7" Oct 29 00:42:48.328629 kubelet[2791]: E1029 00:42:48.328549 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" podUID="6c784c32-7754-4ac4-867b-3b42e31408ba" Oct 29 00:42:48.328972 kubelet[2791]: E1029 00:42:48.328926 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-975dfcd7c-99nd7" podUID="27ae937a-cd86-446b-82dd-96e1560db234" Oct 29 00:42:51.329114 kubelet[2791]: E1029 00:42:51.329041 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-kp5mz" podUID="f8b0c773-fb85-46da-8c2b-d019ba347f69" Oct 29 00:42:51.330591 kubelet[2791]: E1029 00:42:51.329128 2791 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f6cdb67dc-b8vlf" podUID="8171a21c-8e10-436f-a7d7-4945a41db439" Oct 29 00:42:51.830535 systemd[1]: Started sshd@15-10.0.0.76:22-10.0.0.1:59948.service - OpenSSH per-connection server daemon (10.0.0.1:59948). Oct 29 00:42:51.887370 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 59948 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:51.889052 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:51.893632 systemd-logind[1612]: New session 16 of user core. Oct 29 00:42:51.905543 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 29 00:42:52.021521 sshd[5098]: Connection closed by 10.0.0.1 port 59948 Oct 29 00:42:52.021877 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:52.027178 systemd[1]: sshd@15-10.0.0.76:22-10.0.0.1:59948.service: Deactivated successfully. Oct 29 00:42:52.029805 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 00:42:52.030912 systemd-logind[1612]: Session 16 logged out. Waiting for processes to exit. Oct 29 00:42:52.032194 systemd-logind[1612]: Removed session 16. 
Oct 29 00:42:53.331266 kubelet[2791]: E1029 00:42:53.331161 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d" Oct 29 00:42:54.984681 containerd[1636]: time="2025-10-29T00:42:54.984617176Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed20490d18f0039df3a90eab75e00bee32479b32e400f556580bd98ae3bf0f0c\" id:\"f54fa480dac967934b3ef7fae03903063c28d5dd7a27f35111e1b142c5fb2358\" pid:5124 exited_at:{seconds:1761698574 nanos:984129790}" Oct 29 00:42:55.328767 kubelet[2791]: E1029 00:42:55.328655 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:57.044294 systemd[1]: Started sshd@16-10.0.0.76:22-10.0.0.1:59950.service - OpenSSH per-connection server daemon (10.0.0.1:59950). 
Oct 29 00:42:57.109235 sshd[5139]: Accepted publickey for core from 10.0.0.1 port 59950 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:42:57.111270 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:42:57.116146 systemd-logind[1612]: New session 17 of user core. Oct 29 00:42:57.124537 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 29 00:42:57.237830 sshd[5142]: Connection closed by 10.0.0.1 port 59950 Oct 29 00:42:57.238708 sshd-session[5139]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:57.245628 systemd-logind[1612]: Session 17 logged out. Waiting for processes to exit. Oct 29 00:42:57.246029 systemd[1]: sshd@16-10.0.0.76:22-10.0.0.1:59950.service: Deactivated successfully. Oct 29 00:42:57.248326 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 00:42:57.251447 systemd-logind[1612]: Removed session 17. Oct 29 00:43:01.332011 containerd[1636]: time="2025-10-29T00:43:01.331937707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:43:01.970567 containerd[1636]: time="2025-10-29T00:43:01.970507311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:02.105995 containerd[1636]: time="2025-10-29T00:43:02.105894587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:43:02.105995 containerd[1636]: time="2025-10-29T00:43:02.105959661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:43:02.106281 kubelet[2791]: E1029 00:43:02.106193 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:43:02.106281 kubelet[2791]: E1029 00:43:02.106269 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:43:02.106878 kubelet[2791]: E1029 00:43:02.106437 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:ni
l,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wc26h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-859b7_calico-system(8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:02.107715 kubelet[2791]: E1029 00:43:02.107664 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-859b7" podUID="8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7" Oct 29 00:43:02.251102 systemd[1]: Started sshd@17-10.0.0.76:22-10.0.0.1:56992.service - OpenSSH per-connection server daemon (10.0.0.1:56992). Oct 29 00:43:02.314692 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 56992 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:02.316521 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:02.321733 systemd-logind[1612]: New session 18 of user core. Oct 29 00:43:02.329682 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 29 00:43:02.329925 containerd[1636]: time="2025-10-29T00:43:02.329766282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:43:02.463538 sshd[5164]: Connection closed by 10.0.0.1 port 56992 Oct 29 00:43:02.463902 sshd-session[5161]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:02.476286 systemd[1]: sshd@17-10.0.0.76:22-10.0.0.1:56992.service: Deactivated successfully. Oct 29 00:43:02.478547 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 00:43:02.479513 systemd-logind[1612]: Session 18 logged out. Waiting for processes to exit. Oct 29 00:43:02.483866 systemd[1]: Started sshd@18-10.0.0.76:22-10.0.0.1:57002.service - OpenSSH per-connection server daemon (10.0.0.1:57002). Oct 29 00:43:02.484686 systemd-logind[1612]: Removed session 18. 
Oct 29 00:43:02.556782 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 57002 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:02.558547 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:02.563478 systemd-logind[1612]: New session 19 of user core. Oct 29 00:43:02.573689 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 29 00:43:02.639612 containerd[1636]: time="2025-10-29T00:43:02.639535581Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:02.670217 containerd[1636]: time="2025-10-29T00:43:02.670135269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:43:02.670420 containerd[1636]: time="2025-10-29T00:43:02.670218198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:43:02.670578 kubelet[2791]: E1029 00:43:02.670513 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:43:02.670644 kubelet[2791]: E1029 00:43:02.670581 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 
00:43:02.670935 kubelet[2791]: E1029 00:43:02.670861 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp6kh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bfff496d5-c6zrj_calico-apiserver(6c784c32-7754-4ac4-867b-3b42e31408ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:02.671196 containerd[1636]: time="2025-10-29T00:43:02.670922745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:43:02.672286 kubelet[2791]: E1029 00:43:02.672220 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" podUID="6c784c32-7754-4ac4-867b-3b42e31408ba" Oct 29 00:43:03.037453 containerd[1636]: 
time="2025-10-29T00:43:03.037371158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:03.038722 containerd[1636]: time="2025-10-29T00:43:03.038683886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:43:03.038779 containerd[1636]: time="2025-10-29T00:43:03.038767946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:43:03.039026 kubelet[2791]: E1029 00:43:03.038964 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:43:03.039083 kubelet[2791]: E1029 00:43:03.039029 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:43:03.039284 kubelet[2791]: E1029 00:43:03.039172 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:456170bd4865481e98f102db6aff1c52,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xblp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-975dfcd7c-99nd7_calico-system(27ae937a-cd86-446b-82dd-96e1560db234): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:03.041291 containerd[1636]: time="2025-10-29T00:43:03.041258315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 
00:43:03.245869 sshd[5181]: Connection closed by 10.0.0.1 port 57002 Oct 29 00:43:03.246323 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:03.257686 systemd[1]: sshd@18-10.0.0.76:22-10.0.0.1:57002.service: Deactivated successfully. Oct 29 00:43:03.260142 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 00:43:03.261097 systemd-logind[1612]: Session 19 logged out. Waiting for processes to exit. Oct 29 00:43:03.264451 systemd[1]: Started sshd@19-10.0.0.76:22-10.0.0.1:57012.service - OpenSSH per-connection server daemon (10.0.0.1:57012). Oct 29 00:43:03.265466 systemd-logind[1612]: Removed session 19. Oct 29 00:43:03.320125 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 57012 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:03.321557 sshd-session[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:03.326349 systemd-logind[1612]: New session 20 of user core. Oct 29 00:43:03.328413 kubelet[2791]: E1029 00:43:03.328095 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:43:03.332607 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 29 00:43:03.351488 containerd[1636]: time="2025-10-29T00:43:03.351419615Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:03.352614 containerd[1636]: time="2025-10-29T00:43:03.352554043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:43:03.352672 containerd[1636]: time="2025-10-29T00:43:03.352607305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:43:03.352843 kubelet[2791]: E1029 00:43:03.352806 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:43:03.352891 kubelet[2791]: E1029 00:43:03.352851 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:43:03.353035 kubelet[2791]: E1029 00:43:03.352976 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xblp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-975dfcd7c-99nd7_calico-system(27ae937a-cd86-446b-82dd-96e1560db234): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:03.354832 kubelet[2791]: E1029 00:43:03.354785 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-975dfcd7c-99nd7" podUID="27ae937a-cd86-446b-82dd-96e1560db234" Oct 29 00:43:03.826936 sshd[5195]: Connection closed by 10.0.0.1 port 57012 Oct 29 00:43:03.827822 sshd-session[5192]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:03.841323 systemd[1]: sshd@19-10.0.0.76:22-10.0.0.1:57012.service: Deactivated successfully. Oct 29 00:43:03.844762 systemd[1]: session-20.scope: Deactivated successfully. Oct 29 00:43:03.849652 systemd-logind[1612]: Session 20 logged out. Waiting for processes to exit. Oct 29 00:43:03.854115 systemd[1]: Started sshd@20-10.0.0.76:22-10.0.0.1:57028.service - OpenSSH per-connection server daemon (10.0.0.1:57028). Oct 29 00:43:03.855623 systemd-logind[1612]: Removed session 20. 
Oct 29 00:43:03.938932 sshd[5219]: Accepted publickey for core from 10.0.0.1 port 57028 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:03.940432 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:03.945165 systemd-logind[1612]: New session 21 of user core. Oct 29 00:43:03.950536 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 29 00:43:04.173455 sshd[5222]: Connection closed by 10.0.0.1 port 57028 Oct 29 00:43:04.175490 sshd-session[5219]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:04.188879 systemd[1]: sshd@20-10.0.0.76:22-10.0.0.1:57028.service: Deactivated successfully. Oct 29 00:43:04.191537 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 00:43:04.192429 systemd-logind[1612]: Session 21 logged out. Waiting for processes to exit. Oct 29 00:43:04.195928 systemd[1]: Started sshd@21-10.0.0.76:22-10.0.0.1:57040.service - OpenSSH per-connection server daemon (10.0.0.1:57040). Oct 29 00:43:04.197394 systemd-logind[1612]: Removed session 21. Oct 29 00:43:04.249441 sshd[5234]: Accepted publickey for core from 10.0.0.1 port 57040 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:04.251313 sshd-session[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:04.256928 systemd-logind[1612]: New session 22 of user core. Oct 29 00:43:04.263543 systemd[1]: Started session-22.scope - Session 22 of User core. 
Oct 29 00:43:04.329421 kubelet[2791]: E1029 00:43:04.329009 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:43:04.331579 containerd[1636]: time="2025-10-29T00:43:04.331530646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:43:04.391824 sshd[5237]: Connection closed by 10.0.0.1 port 57040 Oct 29 00:43:04.392115 sshd-session[5234]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:04.396538 systemd[1]: sshd@21-10.0.0.76:22-10.0.0.1:57040.service: Deactivated successfully. Oct 29 00:43:04.398941 systemd[1]: session-22.scope: Deactivated successfully. Oct 29 00:43:04.399889 systemd-logind[1612]: Session 22 logged out. Waiting for processes to exit. Oct 29 00:43:04.401681 systemd-logind[1612]: Removed session 22. Oct 29 00:43:04.704100 containerd[1636]: time="2025-10-29T00:43:04.704024191Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:04.705322 containerd[1636]: time="2025-10-29T00:43:04.705278416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:43:04.705428 containerd[1636]: time="2025-10-29T00:43:04.705406270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:43:04.705645 kubelet[2791]: E1029 00:43:04.705561 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:43:04.705745 kubelet[2791]: E1029 00:43:04.705660 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:43:04.705875 kubelet[2791]: E1029 00:43:04.705817 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrvg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgj2m_calico-system(384bfac2-527a-4555-ad7a-580a89495c1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:04.707731 containerd[1636]: time="2025-10-29T00:43:04.707666235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:43:05.034405 containerd[1636]: time="2025-10-29T00:43:05.034336826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:05.035507 containerd[1636]: time="2025-10-29T00:43:05.035450702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:43:05.035556 containerd[1636]: time="2025-10-29T00:43:05.035531737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:43:05.035796 kubelet[2791]: E1029 00:43:05.035730 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:43:05.035864 kubelet[2791]: E1029 00:43:05.035801 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:43:05.035995 kubelet[2791]: E1029 00:43:05.035945 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrvg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgj2m_calico-system(384bfac2-527a-4555-ad7a-580a89495c1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:05.037205 kubelet[2791]: E1029 00:43:05.037119 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d" Oct 29 00:43:05.330202 containerd[1636]: time="2025-10-29T00:43:05.329973806Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:43:05.800091 containerd[1636]: time="2025-10-29T00:43:05.800035354Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:05.801319 containerd[1636]: time="2025-10-29T00:43:05.801261294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:43:05.801378 containerd[1636]: time="2025-10-29T00:43:05.801334914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:43:05.801583 kubelet[2791]: E1029 00:43:05.801518 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:43:05.801881 kubelet[2791]: E1029 00:43:05.801594 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:43:05.801881 kubelet[2791]: E1029 00:43:05.801752 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nktq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bfff496d5-kp5mz_calico-apiserver(f8b0c773-fb85-46da-8c2b-d019ba347f69): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:05.802978 kubelet[2791]: E1029 00:43:05.802947 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-kp5mz" podUID="f8b0c773-fb85-46da-8c2b-d019ba347f69" Oct 29 00:43:06.329006 containerd[1636]: time="2025-10-29T00:43:06.328954463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:43:06.700155 containerd[1636]: time="2025-10-29T00:43:06.700022121Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:43:06.701189 containerd[1636]: time="2025-10-29T00:43:06.701151306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:43:06.701255 containerd[1636]: time="2025-10-29T00:43:06.701226841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:43:06.701375 kubelet[2791]: E1029 00:43:06.701340 2791 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:43:06.701459 kubelet[2791]: E1029 00:43:06.701401 2791 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:43:06.701591 kubelet[2791]: E1029 00:43:06.701525 2791 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85b4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f6cdb67dc-b8vlf_calico-system(8171a21c-8e10-436f-a7d7-4945a41db439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:43:06.702677 kubelet[2791]: E1029 00:43:06.702643 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f6cdb67dc-b8vlf" podUID="8171a21c-8e10-436f-a7d7-4945a41db439" Oct 29 00:43:09.410667 systemd[1]: Started sshd@22-10.0.0.76:22-10.0.0.1:57048.service - OpenSSH per-connection server daemon (10.0.0.1:57048). Oct 29 00:43:09.455657 sshd[5253]: Accepted publickey for core from 10.0.0.1 port 57048 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:09.457116 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:09.462099 systemd-logind[1612]: New session 23 of user core. Oct 29 00:43:09.472519 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 29 00:43:09.589091 sshd[5256]: Connection closed by 10.0.0.1 port 57048 Oct 29 00:43:09.589422 sshd-session[5253]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:09.594427 systemd[1]: sshd@22-10.0.0.76:22-10.0.0.1:57048.service: Deactivated successfully. Oct 29 00:43:09.596670 systemd[1]: session-23.scope: Deactivated successfully. Oct 29 00:43:09.597633 systemd-logind[1612]: Session 23 logged out. Waiting for processes to exit. Oct 29 00:43:09.599276 systemd-logind[1612]: Removed session 23. 
Oct 29 00:43:13.328928 kubelet[2791]: E1029 00:43:13.328547 2791 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:43:13.330040 kubelet[2791]: E1029 00:43:13.329666 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" podUID="6c784c32-7754-4ac4-867b-3b42e31408ba" Oct 29 00:43:13.330040 kubelet[2791]: E1029 00:43:13.329751 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-859b7" podUID="8b1da9ce-4db1-4e33-ac07-f0fdf633e7c7" Oct 29 00:43:14.602821 systemd[1]: Started sshd@23-10.0.0.76:22-10.0.0.1:35936.service - OpenSSH per-connection server daemon (10.0.0.1:35936). Oct 29 00:43:14.663532 sshd[5272]: Accepted publickey for core from 10.0.0.1 port 35936 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:14.667402 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:14.673961 systemd-logind[1612]: New session 24 of user core. 
Oct 29 00:43:14.680539 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 29 00:43:14.823220 sshd[5275]: Connection closed by 10.0.0.1 port 35936 Oct 29 00:43:14.823596 sshd-session[5272]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:14.829570 systemd[1]: sshd@23-10.0.0.76:22-10.0.0.1:35936.service: Deactivated successfully. Oct 29 00:43:14.832798 systemd[1]: session-24.scope: Deactivated successfully. Oct 29 00:43:14.836220 systemd-logind[1612]: Session 24 logged out. Waiting for processes to exit. Oct 29 00:43:14.838030 systemd-logind[1612]: Removed session 24. Oct 29 00:43:15.329899 kubelet[2791]: E1029 00:43:15.329821 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d" Oct 29 00:43:18.328503 kubelet[2791]: E1029 00:43:18.328445 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-kp5mz" podUID="f8b0c773-fb85-46da-8c2b-d019ba347f69" Oct 29 00:43:18.329469 kubelet[2791]: E1029 00:43:18.329374 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-975dfcd7c-99nd7" podUID="27ae937a-cd86-446b-82dd-96e1560db234" Oct 29 00:43:19.328987 kubelet[2791]: E1029 00:43:19.328919 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f6cdb67dc-b8vlf" 
podUID="8171a21c-8e10-436f-a7d7-4945a41db439" Oct 29 00:43:19.846043 systemd[1]: Started sshd@24-10.0.0.76:22-10.0.0.1:35938.service - OpenSSH per-connection server daemon (10.0.0.1:35938). Oct 29 00:43:19.920530 sshd[5288]: Accepted publickey for core from 10.0.0.1 port 35938 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:19.923987 sshd-session[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:19.931545 systemd-logind[1612]: New session 25 of user core. Oct 29 00:43:19.936553 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 29 00:43:20.056734 sshd[5291]: Connection closed by 10.0.0.1 port 35938 Oct 29 00:43:20.057065 sshd-session[5288]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:20.061758 systemd[1]: sshd@24-10.0.0.76:22-10.0.0.1:35938.service: Deactivated successfully. Oct 29 00:43:20.064055 systemd[1]: session-25.scope: Deactivated successfully. Oct 29 00:43:20.064962 systemd-logind[1612]: Session 25 logged out. Waiting for processes to exit. Oct 29 00:43:20.066519 systemd-logind[1612]: Removed session 25. Oct 29 00:43:25.012418 containerd[1636]: time="2025-10-29T00:43:25.012353698Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed20490d18f0039df3a90eab75e00bee32479b32e400f556580bd98ae3bf0f0c\" id:\"d5677fc2c9e4d57f3b227a1647797e9fe632fe1b62a66a238df2097352996e2f\" pid:5317 exited_at:{seconds:1761698605 nanos:11997402}" Oct 29 00:43:25.076551 systemd[1]: Started sshd@25-10.0.0.76:22-10.0.0.1:58744.service - OpenSSH per-connection server daemon (10.0.0.1:58744). Oct 29 00:43:25.163969 sshd[5330]: Accepted publickey for core from 10.0.0.1 port 58744 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:25.166683 sshd-session[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:25.179428 systemd-logind[1612]: New session 26 of user core. 
Oct 29 00:43:25.185576 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 29 00:43:25.314113 sshd[5333]: Connection closed by 10.0.0.1 port 58744 Oct 29 00:43:25.315337 sshd-session[5330]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:25.321820 systemd[1]: sshd@25-10.0.0.76:22-10.0.0.1:58744.service: Deactivated successfully. Oct 29 00:43:25.324296 systemd[1]: session-26.scope: Deactivated successfully. Oct 29 00:43:25.326123 systemd-logind[1612]: Session 26 logged out. Waiting for processes to exit. Oct 29 00:43:25.328232 systemd-logind[1612]: Removed session 26. Oct 29 00:43:26.329841 kubelet[2791]: E1029 00:43:26.329777 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bfff496d5-c6zrj" podUID="6c784c32-7754-4ac4-867b-3b42e31408ba" Oct 29 00:43:26.331702 kubelet[2791]: E1029 00:43:26.331631 2791 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgj2m" podUID="384bfac2-527a-4555-ad7a-580a89495c1d"