Nov 4 04:55:53.829953 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 03:00:51 -00 2025
Nov 4 04:55:53.829997 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 04:55:53.830007 kernel: BIOS-provided physical RAM map:
Nov 4 04:55:53.830014 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Nov 4 04:55:53.830021 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Nov 4 04:55:53.830034 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Nov 4 04:55:53.830042 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Nov 4 04:55:53.830050 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Nov 4 04:55:53.830059 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Nov 4 04:55:53.830067 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Nov 4 04:55:53.830074 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Nov 4 04:55:53.830081 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Nov 4 04:55:53.830088 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Nov 4 04:55:53.830102 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Nov 4 04:55:53.830110 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Nov 4 04:55:53.830118 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Nov 4 04:55:53.830128 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 4 04:55:53.830135 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 04:55:53.830149 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 04:55:53.830157 kernel: NX (Execute Disable) protection: active
Nov 4 04:55:53.830164 kernel: APIC: Static calls initialized
Nov 4 04:55:53.830172 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable
Nov 4 04:55:53.830180 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable
Nov 4 04:55:53.830187 kernel: extended physical RAM map:
Nov 4 04:55:53.830195 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Nov 4 04:55:53.830202 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Nov 4 04:55:53.830210 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Nov 4 04:55:53.830217 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Nov 4 04:55:53.830240 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable
Nov 4 04:55:53.830248 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable
Nov 4 04:55:53.830256 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable
Nov 4 04:55:53.830264 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable
Nov 4 04:55:53.830271 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable
Nov 4 04:55:53.830279 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Nov 4 04:55:53.830286 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Nov 4 04:55:53.830294 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Nov 4 04:55:53.830301 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Nov 4 04:55:53.830309 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Nov 4 04:55:53.830316 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Nov 4 04:55:53.830330 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Nov 4 04:55:53.830345 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Nov 4 04:55:53.830353 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 4 04:55:53.830361 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 04:55:53.830375 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 04:55:53.830382 kernel: efi: EFI v2.7 by EDK II
Nov 4 04:55:53.830390 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Nov 4 04:55:53.830398 kernel: random: crng init done
Nov 4 04:55:53.830406 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Nov 4 04:55:53.830413 kernel: secureboot: Secure boot enabled
Nov 4 04:55:53.830421 kernel: SMBIOS 2.8 present.
Nov 4 04:55:53.830429 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 4 04:55:53.830437 kernel: DMI: Memory slots populated: 1/1
Nov 4 04:55:53.830444 kernel: Hypervisor detected: KVM
Nov 4 04:55:53.830458 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Nov 4 04:55:53.830466 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 04:55:53.830474 kernel: kvm-clock: using sched offset of 7076648501 cycles
Nov 4 04:55:53.830482 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 04:55:53.830491 kernel: tsc: Detected 2794.750 MHz processor
Nov 4 04:55:53.830500 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 04:55:53.830508 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 04:55:53.830516 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Nov 4 04:55:53.830527 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 4 04:55:53.830543 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 04:55:53.830553 kernel: Using GB pages for direct mapping
Nov 4 04:55:53.830561 kernel: ACPI: Early table checksum verification disabled
Nov 4 04:55:53.830570 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Nov 4 04:55:53.830578 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 4 04:55:53.830586 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:53.830594 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:53.830608 kernel: ACPI: FACS 0x000000009BBDD000 000040
Nov 4 04:55:53.830617 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:53.830625 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:53.830633 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:53.830641 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:53.830649 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 4 04:55:53.830658 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Nov 4 04:55:53.830672 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Nov 4 04:55:53.830681 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Nov 4 04:55:53.830688 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Nov 4 04:55:53.830696 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Nov 4 04:55:53.830704 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Nov 4 04:55:53.830712 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Nov 4 04:55:53.830720 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Nov 4 04:55:53.830734 kernel: No NUMA configuration found
Nov 4 04:55:53.830743 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Nov 4 04:55:53.830751 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Nov 4 04:55:53.830759 kernel: Zone ranges:
Nov 4 04:55:53.830767 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 04:55:53.830775 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Nov 4 04:55:53.830783 kernel: Normal empty
Nov 4 04:55:53.830791 kernel: Device empty
Nov 4 04:55:53.830805 kernel: Movable zone start for each node
Nov 4 04:55:53.830813 kernel: Early memory node ranges
Nov 4 04:55:53.830821 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Nov 4 04:55:53.830829 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Nov 4 04:55:53.830837 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Nov 4 04:55:53.830845 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Nov 4 04:55:53.830853 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Nov 4 04:55:53.830868 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Nov 4 04:55:53.830923 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 04:55:53.830931 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Nov 4 04:55:53.830940 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 04:55:53.830948 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 4 04:55:53.830956 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 4 04:55:53.830964 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Nov 4 04:55:53.830972 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 04:55:53.830989 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 04:55:53.830997 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 04:55:53.831005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 04:55:53.831016 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 04:55:53.831024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 04:55:53.831032 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 04:55:53.831040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 04:55:53.831055 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 04:55:53.831063 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 04:55:53.831071 kernel: TSC deadline timer available
Nov 4 04:55:53.831079 kernel: CPU topo: Max. logical packages: 1
Nov 4 04:55:53.831087 kernel: CPU topo: Max. logical dies: 1
Nov 4 04:55:53.831120 kernel: CPU topo: Max. dies per package: 1
Nov 4 04:55:53.831128 kernel: CPU topo: Max. threads per core: 1
Nov 4 04:55:53.831136 kernel: CPU topo: Num. cores per package: 4
Nov 4 04:55:53.831145 kernel: CPU topo: Num. threads per package: 4
Nov 4 04:55:53.831161 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 4 04:55:53.831170 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 04:55:53.831178 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 4 04:55:53.831186 kernel: kvm-guest: setup PV sched yield
Nov 4 04:55:53.831201 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 4 04:55:53.831209 kernel: Booting paravirtualized kernel on KVM
Nov 4 04:55:53.831218 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 04:55:53.831234 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 4 04:55:53.831243 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 4 04:55:53.831252 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 4 04:55:53.831260 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 4 04:55:53.831275 kernel: kvm-guest: PV spinlocks enabled
Nov 4 04:55:53.831284 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 4 04:55:53.831294 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 04:55:53.831302 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 04:55:53.831311 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 04:55:53.831319 kernel: Fallback order for Node 0: 0
Nov 4 04:55:53.831328 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Nov 4 04:55:53.831343 kernel: Policy zone: DMA32
Nov 4 04:55:53.831351 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 04:55:53.831360 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 4 04:55:53.831368 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 04:55:53.831377 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 04:55:53.831385 kernel: Dynamic Preempt: voluntary
Nov 4 04:55:53.831393 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 04:55:53.831409 kernel: rcu: RCU event tracing is enabled.
Nov 4 04:55:53.831417 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 4 04:55:53.831426 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 04:55:53.831434 kernel: Rude variant of Tasks RCU enabled.
Nov 4 04:55:53.831443 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 04:55:53.831451 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 04:55:53.831459 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 4 04:55:53.831474 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 04:55:53.831484 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 04:55:53.831494 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 04:55:53.831503 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 4 04:55:53.831511 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 04:55:53.831519 kernel: Console: colour dummy device 80x25
Nov 4 04:55:53.831528 kernel: printk: legacy console [ttyS0] enabled
Nov 4 04:55:53.831543 kernel: ACPI: Core revision 20240827
Nov 4 04:55:53.831551 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 04:55:53.831560 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 04:55:53.831568 kernel: x2apic enabled
Nov 4 04:55:53.831576 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 04:55:53.831585 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 4 04:55:53.831593 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 4 04:55:53.831615 kernel: kvm-guest: setup PV IPIs
Nov 4 04:55:53.831626 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 04:55:53.831636 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 4 04:55:53.831647 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 4 04:55:53.831658 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 4 04:55:53.831668 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 4 04:55:53.831678 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 4 04:55:53.831692 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 04:55:53.831703 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 04:55:53.831711 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 04:55:53.831720 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 4 04:55:53.831728 kernel: active return thunk: retbleed_return_thunk
Nov 4 04:55:53.831736 kernel: RETBleed: Mitigation: untrained return thunk
Nov 4 04:55:53.831745 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 04:55:53.831759 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 04:55:53.831767 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 4 04:55:53.831777 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 4 04:55:53.831785 kernel: active return thunk: srso_return_thunk
Nov 4 04:55:53.831794 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 4 04:55:53.831802 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 04:55:53.831810 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 04:55:53.831825 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 04:55:53.831834 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 04:55:53.831842 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 4 04:55:53.831850 kernel: Freeing SMP alternatives memory: 32K
Nov 4 04:55:53.831858 kernel: pid_max: default: 32768 minimum: 301
Nov 4 04:55:53.831867 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 04:55:53.831896 kernel: landlock: Up and running.
Nov 4 04:55:53.831912 kernel: SELinux: Initializing.
Nov 4 04:55:53.831921 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 04:55:53.831929 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 04:55:53.831938 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 4 04:55:53.831946 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 4 04:55:53.831954 kernel: ... version: 0
Nov 4 04:55:53.831965 kernel: ... bit width: 48
Nov 4 04:55:53.831980 kernel: ... generic registers: 6
Nov 4 04:55:53.831988 kernel: ... value mask: 0000ffffffffffff
Nov 4 04:55:53.831997 kernel: ... max period: 00007fffffffffff
Nov 4 04:55:53.832005 kernel: ... fixed-purpose events: 0
Nov 4 04:55:53.832013 kernel: ... event mask: 000000000000003f
Nov 4 04:55:53.832021 kernel: signal: max sigframe size: 1776
Nov 4 04:55:53.832030 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 04:55:53.832038 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 04:55:53.832054 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 04:55:53.832062 kernel: smp: Bringing up secondary CPUs ...
Nov 4 04:55:53.832071 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 04:55:53.832079 kernel: .... node #0, CPUs: #1 #2 #3
Nov 4 04:55:53.832087 kernel: smp: Brought up 1 node, 4 CPUs
Nov 4 04:55:53.832095 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 4 04:55:53.832104 kernel: Memory: 2427644K/2552216K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15360K init, 2684K bss, 118632K reserved, 0K cma-reserved)
Nov 4 04:55:53.832119 kernel: devtmpfs: initialized
Nov 4 04:55:53.832128 kernel: x86/mm: Memory block size: 128MB
Nov 4 04:55:53.832136 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Nov 4 04:55:53.832145 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Nov 4 04:55:53.832153 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 04:55:53.832161 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 4 04:55:53.832170 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 04:55:53.832185 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 04:55:53.832193 kernel: audit: initializing netlink subsys (disabled)
Nov 4 04:55:53.832202 kernel: audit: type=2000 audit(1762232149.668:1): state=initialized audit_enabled=0 res=1
Nov 4 04:55:53.832210 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 04:55:53.832218 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 04:55:53.832234 kernel: cpuidle: using governor menu
Nov 4 04:55:53.832242 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 04:55:53.832258 kernel: dca service started, version 1.12.1
Nov 4 04:55:53.832267 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 4 04:55:53.832275 kernel: PCI: Using configuration type 1 for base access
Nov 4 04:55:53.832284 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 04:55:53.832292 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 04:55:53.832300 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 04:55:53.832309 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 04:55:53.832324 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 04:55:53.832332 kernel: ACPI: Added _OSI(Module Device)
Nov 4 04:55:53.832340 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 04:55:53.832349 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 04:55:53.832357 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 04:55:53.832365 kernel: ACPI: Interpreter enabled
Nov 4 04:55:53.832373 kernel: ACPI: PM: (supports S0 S5)
Nov 4 04:55:53.832388 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 04:55:53.832396 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 04:55:53.832405 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 04:55:53.832413 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 4 04:55:53.832422 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 04:55:53.832695 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 04:55:53.832925 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 4 04:55:53.833101 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 4 04:55:53.833113 kernel: PCI host bridge to bus 0000:00
Nov 4 04:55:53.833296 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 04:55:53.833452 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 04:55:53.833606 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 04:55:53.833773 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Nov 4 04:55:53.833943 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Nov 4 04:55:53.834097 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Nov 4 04:55:53.834297 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 04:55:53.834491 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 4 04:55:53.834683 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 4 04:55:53.834852 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Nov 4 04:55:53.835051 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Nov 4 04:55:53.835221 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 4 04:55:53.835412 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 04:55:53.835601 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 04:55:53.835783 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Nov 4 04:55:53.835979 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Nov 4 04:55:53.836149 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Nov 4 04:55:53.836339 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 04:55:53.836524 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Nov 4 04:55:53.836695 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Nov 4 04:55:53.836892 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Nov 4 04:55:53.837071 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 04:55:53.837249 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Nov 4 04:55:53.837419 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Nov 4 04:55:53.837596 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Nov 4 04:55:53.837779 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Nov 4 04:55:53.837981 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 4 04:55:53.838151 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 4 04:55:53.838340 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 4 04:55:53.838508 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Nov 4 04:55:53.838685 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Nov 4 04:55:53.838890 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 4 04:55:53.839074 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Nov 4 04:55:53.839087 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 04:55:53.839096 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 04:55:53.839105 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 04:55:53.839113 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 04:55:53.839133 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 4 04:55:53.839142 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 4 04:55:53.839150 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 4 04:55:53.839159 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 4 04:55:53.839167 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 4 04:55:53.839176 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 4 04:55:53.839184 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 4 04:55:53.839199 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 4 04:55:53.839208 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 4 04:55:53.839216 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 4 04:55:53.839233 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 4 04:55:53.839241 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 4 04:55:53.839249 kernel: iommu: Default domain type: Translated
Nov 4 04:55:53.839259 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 04:55:53.839275 kernel: efivars: Registered efivars operations
Nov 4 04:55:53.839283 kernel: PCI: Using ACPI for IRQ routing
Nov 4 04:55:53.839291 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 04:55:53.839300 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Nov 4 04:55:53.839308 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff]
Nov 4 04:55:53.839317 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff]
Nov 4 04:55:53.839325 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Nov 4 04:55:53.839333 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Nov 4 04:55:53.839510 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 4 04:55:53.839681 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 4 04:55:53.839846 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 04:55:53.839858 kernel: vgaarb: loaded
Nov 4 04:55:53.839882 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 04:55:53.839891 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 04:55:53.839958 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 04:55:53.839967 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 04:55:53.839976 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 04:55:53.839984 kernel: pnp: PnP ACPI init
Nov 4 04:55:53.840174 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Nov 4 04:55:53.840187 kernel: pnp: PnP ACPI: found 6 devices
Nov 4 04:55:53.840196 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 04:55:53.840213 kernel: NET: Registered PF_INET protocol family
Nov 4 04:55:53.840232 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 4 04:55:53.840241 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 4 04:55:53.840249 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 04:55:53.840258 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 04:55:53.840267 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 4 04:55:53.840276 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 4 04:55:53.840297 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 04:55:53.840305 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 04:55:53.840314 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 04:55:53.840322 kernel: NET: Registered PF_XDP protocol family
Nov 4 04:55:53.840494 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 4 04:55:53.840664 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 4 04:55:53.840833 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 04:55:53.841005 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 04:55:53.841161 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 04:55:53.841328 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Nov 4 04:55:53.841483 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Nov 4 04:55:53.841661 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Nov 4 04:55:53.841675 kernel: PCI: CLS 0 bytes, default 64
Nov 4 04:55:53.841699 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 4 04:55:53.841708 kernel: Initialise system trusted keyrings
Nov 4 04:55:53.841716 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 4 04:55:53.841725 kernel: Key type asymmetric registered
Nov 4 04:55:53.841733 kernel: Asymmetric key parser 'x509' registered
Nov 4 04:55:53.841784 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 04:55:53.841798 kernel: io scheduler mq-deadline registered
Nov 4 04:55:53.841814 kernel: io scheduler kyber registered
Nov 4 04:55:53.841822 kernel: io scheduler bfq registered
Nov 4 04:55:53.841831 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 04:55:53.841840 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 4 04:55:53.841849 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 4 04:55:53.841858 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 4 04:55:53.841866 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 04:55:53.841895 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 04:55:53.841904 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 04:55:53.841912 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 04:55:53.841921 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 04:55:53.842099 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 4 04:55:53.842111 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 04:55:53.842281 kernel: rtc_cmos 00:04: registered as rtc0
Nov 4 04:55:53.842457 kernel: rtc_cmos 00:04: setting system clock to 2025-11-04T04:55:51 UTC (1762232151)
Nov 4 04:55:53.842618 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 4 04:55:53.842630 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 4 04:55:53.842639 kernel: efifb: probing for efifb
Nov 4 04:55:53.842650 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Nov 4 04:55:53.842660 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Nov 4 04:55:53.842683 kernel: efifb: scrolling: redraw
Nov 4 04:55:53.842694 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 4 04:55:53.842705 kernel: Console: switching to colour frame buffer device 160x50
Nov 4 04:55:53.842724 kernel: fb0: EFI VGA frame buffer device
Nov 4 04:55:53.842735 kernel: pstore: Using crash dump compression: deflate
Nov 4 04:55:53.842753 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 4 04:55:53.842764 kernel: NET: Registered PF_INET6 protocol family
Nov 4 04:55:53.842775 kernel: Segment Routing with IPv6
Nov 4 04:55:53.842786 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 04:55:53.842797 kernel: NET: Registered PF_PACKET protocol family
Nov 4 04:55:53.842808 kernel: Key type dns_resolver registered
Nov 4 04:55:53.842819 kernel: IPI shorthand broadcast: enabled
Nov 4 04:55:53.842837 kernel: sched_clock: Marking stable (2271004683, 513326638)->(3262571821, -478240500)
Nov 4 04:55:53.842848 kernel: registered taskstats version 1
Nov 4 04:55:53.842857 kernel: Loading compiled-in X.509 certificates
Nov 4 04:55:53.842866 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: dafbe857b8ef9eaad4381fdddb57853ce023547e'
Nov 4 04:55:53.842894 kernel: Demotion targets for Node 0: null
Nov 4 04:55:53.842903 kernel: Key type .fscrypt registered
Nov 4 04:55:53.842912 kernel: Key type fscrypt-provisioning registered
Nov 4 04:55:53.842928 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 04:55:53.842937 kernel: ima: Allocated hash algorithm: sha1
Nov 4 04:55:53.842946 kernel: ima: No architecture policies found
Nov 4 04:55:53.842960 kernel: clk: Disabling unused clocks
Nov 4 04:55:53.842969 kernel: Freeing unused kernel image (initmem) memory: 15360K
Nov 4 04:55:53.842978 kernel: Write protecting the kernel read-only data: 45056k
Nov 4 04:55:53.842987 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 4 04:55:53.843002 kernel: Run /init as init process
Nov 4 04:55:53.843010 kernel: with arguments:
Nov 4 04:55:53.843019 kernel: /init
Nov 4 04:55:53.843028 kernel: with environment:
Nov 4 04:55:53.843036 kernel: HOME=/
Nov 4 04:55:53.843045 kernel: TERM=linux
Nov 4 04:55:53.843054 kernel: SCSI subsystem initialized
Nov 4 04:55:53.843069 kernel: libata version 3.00 loaded.
Nov 4 04:55:53.843257 kernel: ahci 0000:00:1f.2: version 3.0
Nov 4 04:55:53.843270 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 4 04:55:53.843452 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 4 04:55:53.843623 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 4 04:55:53.843796 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 4 04:55:53.844014 kernel: scsi host0: ahci
Nov 4 04:55:53.844210 kernel: scsi host1: ahci
Nov 4 04:55:53.844402 kernel: scsi host2: ahci
Nov 4 04:55:53.844592 kernel: scsi host3: ahci
Nov 4 04:55:53.844794 kernel: scsi host4: ahci
Nov 4 04:55:53.844990 kernel: scsi host5: ahci
Nov 4 04:55:53.845014 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Nov 4 04:55:53.845023 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Nov 4 04:55:53.845032 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Nov 4 04:55:53.845041 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Nov 4 04:55:53.845050 kernel: ata5: SATA max UDMA/133 abar
m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Nov 4 04:55:53.845060 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Nov 4 04:55:53.845076 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 4 04:55:53.845085 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 4 04:55:53.845094 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 4 04:55:53.845103 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 4 04:55:53.845111 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 4 04:55:53.845120 kernel: ata3.00: LPM support broken, forcing max_power Nov 4 04:55:53.845129 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 4 04:55:53.845137 kernel: ata3.00: applying bridge limits Nov 4 04:55:53.845152 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 4 04:55:53.845161 kernel: ata3.00: LPM support broken, forcing max_power Nov 4 04:55:53.845170 kernel: ata3.00: configured for UDMA/100 Nov 4 04:55:53.845379 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 4 04:55:53.845591 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 4 04:55:53.845836 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 4 04:55:53.845881 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 4 04:55:53.845891 kernel: GPT:16515071 != 27000831 Nov 4 04:55:53.845900 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 4 04:55:53.845909 kernel: GPT:16515071 != 27000831 Nov 4 04:55:53.845917 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 4 04:55:53.845926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 4 04:55:53.846117 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 4 04:55:53.846140 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 4 04:55:53.846334 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 4 04:55:53.846346 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 4 04:55:53.846355 kernel: device-mapper: uevent: version 1.0.3 Nov 4 04:55:53.846365 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 4 04:55:53.846374 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 4 04:55:53.846392 kernel: raid6: avx2x4 gen() 21767 MB/s Nov 4 04:55:53.846401 kernel: raid6: avx2x2 gen() 21120 MB/s Nov 4 04:55:53.846409 kernel: raid6: avx2x1 gen() 16961 MB/s Nov 4 04:55:53.846418 kernel: raid6: using algorithm avx2x4 gen() 21767 MB/s Nov 4 04:55:53.846427 kernel: raid6: .... 
xor() 5124 MB/s, rmw enabled Nov 4 04:55:53.846436 kernel: raid6: using avx2x2 recovery algorithm Nov 4 04:55:53.846445 kernel: xor: automatically using best checksumming function avx Nov 4 04:55:53.846454 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 4 04:55:53.846475 kernel: BTRFS: device fsid 6f0a5369-79b6-4a87-b9a6-85ec05be306c devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (180) Nov 4 04:55:53.846484 kernel: BTRFS info (device dm-0): first mount of filesystem 6f0a5369-79b6-4a87-b9a6-85ec05be306c Nov 4 04:55:53.846493 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:55:53.846502 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 4 04:55:53.846511 kernel: BTRFS info (device dm-0): enabling free space tree Nov 4 04:55:53.846520 kernel: loop: module loaded Nov 4 04:55:53.846529 kernel: loop0: detected capacity change from 0 to 100136 Nov 4 04:55:53.846544 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 4 04:55:53.846554 systemd[1]: Successfully made /usr/ read-only. Nov 4 04:55:53.846566 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 04:55:53.846576 systemd[1]: Detected virtualization kvm. Nov 4 04:55:53.846585 systemd[1]: Detected architecture x86-64. Nov 4 04:55:53.846601 systemd[1]: Running in initrd. Nov 4 04:55:53.846610 systemd[1]: No hostname configured, using default hostname. Nov 4 04:55:53.846620 systemd[1]: Hostname set to . Nov 4 04:55:53.846629 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 04:55:53.846638 systemd[1]: Queued start job for default target initrd.target. 
Nov 4 04:55:53.846648 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 04:55:53.846657 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 04:55:53.846674 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 04:55:53.846684 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 4 04:55:53.846693 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 04:55:53.846703 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 4 04:55:53.846713 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 4 04:55:53.846728 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 04:55:53.846738 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 04:55:53.846747 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 4 04:55:53.846757 systemd[1]: Reached target paths.target - Path Units. Nov 4 04:55:53.846766 systemd[1]: Reached target slices.target - Slice Units. Nov 4 04:55:53.846775 systemd[1]: Reached target swap.target - Swaps. Nov 4 04:55:53.846785 systemd[1]: Reached target timers.target - Timer Units. Nov 4 04:55:53.846801 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 04:55:53.846810 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 04:55:53.846820 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 4 04:55:53.846829 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 4 04:55:53.846838 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 4 04:55:53.846848 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 04:55:53.846857 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 04:55:53.846886 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 04:55:53.846896 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 4 04:55:53.846905 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 4 04:55:53.846914 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 04:55:53.846924 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 4 04:55:53.846934 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 4 04:55:53.846943 systemd[1]: Starting systemd-fsck-usr.service... Nov 4 04:55:53.846960 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 04:55:53.846969 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 04:55:53.846978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:55:53.846988 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 4 04:55:53.847003 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 04:55:53.847013 systemd[1]: Finished systemd-fsck-usr.service. Nov 4 04:55:53.847051 systemd-journald[316]: Collecting audit messages is disabled. Nov 4 04:55:53.847080 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 04:55:53.847090 systemd-journald[316]: Journal started Nov 4 04:55:53.847110 systemd-journald[316]: Runtime Journal (/run/log/journal/a74e0912ecee4ac1b6da7f6d7d1c4743) is 5.9M, max 47.8M, 41.8M free. 
Nov 4 04:55:53.848903 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 04:55:53.853165 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 04:55:53.863941 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 4 04:55:53.867940 kernel: Bridge firewalling registered Nov 4 04:55:53.867591 systemd-modules-load[321]: Inserted module 'br_netfilter' Nov 4 04:55:53.870148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 04:55:53.873532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 04:55:53.877508 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 4 04:55:53.893181 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 04:55:53.898697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:55:53.902579 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 04:55:53.906984 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 4 04:55:53.920593 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 04:55:53.922750 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 04:55:53.926993 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 04:55:53.952133 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 04:55:53.953151 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 04:55:53.958219 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 4 04:55:53.985949 dracut-cmdline[361]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01 Nov 4 04:55:54.039391 systemd-resolved[348]: Positive Trust Anchors: Nov 4 04:55:54.039411 systemd-resolved[348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 04:55:54.039415 systemd-resolved[348]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 04:55:54.039447 systemd-resolved[348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 04:55:54.082973 systemd-resolved[348]: Defaulting to hostname 'linux'. Nov 4 04:55:54.084594 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 04:55:54.086559 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 04:55:54.149924 kernel: Loading iSCSI transport class v2.0-870. 
Nov 4 04:55:54.175043 kernel: iscsi: registered transport (tcp) Nov 4 04:55:54.203563 kernel: iscsi: registered transport (qla4xxx) Nov 4 04:55:54.203660 kernel: QLogic iSCSI HBA Driver Nov 4 04:55:54.235135 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 04:55:54.275832 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 04:55:54.277805 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 04:55:54.338080 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 4 04:55:54.343328 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 4 04:55:54.346379 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 4 04:55:54.418584 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 4 04:55:54.421972 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 04:55:54.463424 systemd-udevd[597]: Using default interface naming scheme 'v257'. Nov 4 04:55:54.482148 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 04:55:54.498019 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 4 04:55:54.529093 dracut-pre-trigger[658]: rd.md=0: removing MD RAID activation Nov 4 04:55:54.551192 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 04:55:54.554092 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 04:55:54.576601 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 04:55:54.578772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 4 04:55:54.616325 systemd-networkd[726]: lo: Link UP Nov 4 04:55:54.616336 systemd-networkd[726]: lo: Gained carrier Nov 4 04:55:54.619242 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 04:55:54.622957 systemd[1]: Reached target network.target - Network. Nov 4 04:55:54.676811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 04:55:54.682326 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 4 04:55:54.786899 kernel: cryptd: max_cpu_qlen set to 1000 Nov 4 04:55:54.808398 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 04:55:54.821758 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:55:54.825260 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 04:55:54.868397 kernel: AES CTR mode by8 optimization enabled Nov 4 04:55:54.825266 systemd-networkd[726]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 04:55:54.826105 systemd-networkd[726]: eth0: Link UP Nov 4 04:55:54.826322 systemd-networkd[726]: eth0: Gained carrier Nov 4 04:55:54.826331 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 04:55:54.889971 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 4 04:55:54.853168 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:55:54.862984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:55:54.882398 systemd-networkd[726]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 4 04:55:54.906864 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Nov 4 04:55:54.966389 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 4 04:55:54.975219 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 4 04:55:54.987174 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 4 04:55:54.989061 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 4 04:55:54.991708 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 04:55:54.991770 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:55:54.996664 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:55:55.018562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:55:55.031561 systemd-resolved[348]: Detected conflict on linux IN A 10.0.0.39 Nov 4 04:55:55.031583 systemd-resolved[348]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Nov 4 04:55:55.046683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:55:55.070486 disk-uuid[850]: Primary Header is updated. Nov 4 04:55:55.070486 disk-uuid[850]: Secondary Entries is updated. Nov 4 04:55:55.070486 disk-uuid[850]: Secondary Header is updated. Nov 4 04:55:55.071203 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 4 04:55:55.074956 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 04:55:55.076282 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 04:55:55.076889 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 04:55:55.079645 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 4 04:55:55.248459 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Nov 4 04:55:56.244927 disk-uuid[859]: Warning: The kernel is still using the old partition table. Nov 4 04:55:56.244927 disk-uuid[859]: The new table will be used at the next reboot or after you Nov 4 04:55:56.244927 disk-uuid[859]: run partprobe(8) or kpartx(8) Nov 4 04:55:56.244927 disk-uuid[859]: The operation has completed successfully. Nov 4 04:55:56.257767 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 4 04:55:56.258051 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 4 04:55:56.264418 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 4 04:55:56.322922 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (878) Nov 4 04:55:56.326215 kernel: BTRFS info (device vda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:55:56.326243 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:55:56.330275 kernel: BTRFS info (device vda6): turning on async discard Nov 4 04:55:56.330306 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 04:55:56.342775 kernel: BTRFS info (device vda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:55:56.344017 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 04:55:56.349597 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 4 04:55:56.838180 systemd-networkd[726]: eth0: Gained IPv6LL Nov 4 04:55:56.936969 ignition[897]: Ignition 2.22.0 Nov 4 04:55:56.936992 ignition[897]: Stage: fetch-offline Nov 4 04:55:56.937063 ignition[897]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:55:56.937077 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 04:55:56.937204 ignition[897]: parsed url from cmdline: "" Nov 4 04:55:56.937208 ignition[897]: no config URL provided Nov 4 04:55:56.937214 ignition[897]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 04:55:56.937227 ignition[897]: no config at "/usr/lib/ignition/user.ign" Nov 4 04:55:56.937283 ignition[897]: op(1): [started] loading QEMU firmware config module Nov 4 04:55:56.937292 ignition[897]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 4 04:55:56.956282 ignition[897]: op(1): [finished] loading QEMU firmware config module Nov 4 04:55:57.040112 ignition[897]: parsing config with SHA512: b40d01f1f57409f011ec210e78f991db6938eb9e798e2227878c1d76fa7fd53da2e03ca28affbc4fa7c5be8b8027e7c83ea1ba53583e80169fc7f79c3b9ae0e3 Nov 4 04:55:57.048437 unknown[897]: fetched base config from "system" Nov 4 04:55:57.048452 unknown[897]: fetched user config from "qemu" Nov 4 04:55:57.048938 ignition[897]: fetch-offline: fetch-offline passed Nov 4 04:55:57.049018 ignition[897]: Ignition finished successfully Nov 4 04:55:57.052917 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 04:55:57.054085 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 4 04:55:57.055615 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 4 04:55:57.098994 ignition[907]: Ignition 2.22.0 Nov 4 04:55:57.099009 ignition[907]: Stage: kargs Nov 4 04:55:57.099208 ignition[907]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:55:57.099220 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 04:55:57.100351 ignition[907]: kargs: kargs passed Nov 4 04:55:57.100407 ignition[907]: Ignition finished successfully Nov 4 04:55:57.108796 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 4 04:55:57.111931 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 4 04:55:57.449562 ignition[915]: Ignition 2.22.0 Nov 4 04:55:57.449577 ignition[915]: Stage: disks Nov 4 04:55:57.449801 ignition[915]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:55:57.449811 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 04:55:57.456194 ignition[915]: disks: disks passed Nov 4 04:55:57.456265 ignition[915]: Ignition finished successfully Nov 4 04:55:57.460793 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 4 04:55:57.462083 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 4 04:55:57.464955 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 4 04:55:57.465480 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 04:55:57.472478 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 04:55:57.477883 systemd[1]: Reached target basic.target - Basic System. Nov 4 04:55:57.483325 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 4 04:55:57.532432 systemd-fsck[925]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 4 04:55:57.542136 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 4 04:55:57.548416 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 4 04:55:57.670925 kernel: EXT4-fs (vda9): mounted filesystem c35327fb-3cdd-496e-85aa-9e1b4133507f r/w with ordered data mode. Quota mode: none. Nov 4 04:55:57.672112 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 4 04:55:57.675715 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 4 04:55:57.680895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 04:55:57.684910 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 4 04:55:57.688171 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 4 04:55:57.688230 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 4 04:55:57.688262 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 04:55:57.701118 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 4 04:55:57.706258 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 4 04:55:57.802105 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (933) Nov 4 04:55:57.806068 kernel: BTRFS info (device vda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:55:57.806100 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:55:57.810827 kernel: BTRFS info (device vda6): turning on async discard Nov 4 04:55:57.810861 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 04:55:57.813199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 4 04:55:57.845249 initrd-setup-root[957]: cut: /sysroot/etc/passwd: No such file or directory Nov 4 04:55:57.851909 initrd-setup-root[964]: cut: /sysroot/etc/group: No such file or directory Nov 4 04:55:57.859100 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory Nov 4 04:55:57.864073 initrd-setup-root[978]: cut: /sysroot/etc/gshadow: No such file or directory Nov 4 04:55:57.991758 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 4 04:55:57.998367 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 4 04:55:58.003527 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 4 04:55:58.029466 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 4 04:55:58.050251 kernel: BTRFS info (device vda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:55:58.065230 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 4 04:55:58.168483 ignition[1048]: INFO : Ignition 2.22.0 Nov 4 04:55:58.168483 ignition[1048]: INFO : Stage: mount Nov 4 04:55:58.184327 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 04:55:58.184327 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 04:55:58.184327 ignition[1048]: INFO : mount: mount passed Nov 4 04:55:58.184327 ignition[1048]: INFO : Ignition finished successfully Nov 4 04:55:58.172849 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 4 04:55:58.184550 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 4 04:55:58.675656 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 4 04:55:58.718948 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1059) Nov 4 04:55:58.720947 kernel: BTRFS info (device vda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:55:58.720988 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:55:58.728842 kernel: BTRFS info (device vda6): turning on async discard Nov 4 04:55:58.728914 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 04:55:58.731404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 04:55:58.845955 ignition[1077]: INFO : Ignition 2.22.0 Nov 4 04:55:58.845955 ignition[1077]: INFO : Stage: files Nov 4 04:55:58.848830 ignition[1077]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 04:55:58.848830 ignition[1077]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 04:55:58.848830 ignition[1077]: DEBUG : files: compiled without relabeling support, skipping Nov 4 04:55:58.856729 ignition[1077]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 4 04:55:58.856729 ignition[1077]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 4 04:55:58.864198 ignition[1077]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 4 04:55:58.866938 ignition[1077]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 4 04:55:58.869616 unknown[1077]: wrote ssh authorized keys file for user: core Nov 4 04:55:58.872692 ignition[1077]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 4 04:55:58.875543 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 4 04:55:58.879614 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 4 04:55:58.972687 
ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 4 04:55:59.037599 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 4 04:55:59.037599 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 4 04:55:59.044456 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 4 04:55:59.044456 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 4 04:55:59.044456 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 4 04:55:59.044456 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 04:55:59.044456 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 04:55:59.044456 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 04:55:59.044456 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 04:55:59.101030 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 04:55:59.104357 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 04:55:59.104357 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 4 04:55:59.137815 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 4 04:55:59.137815 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 4 04:55:59.146258 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 4 04:55:59.493082 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 4 04:56:00.223971 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 4 04:56:00.223971 ignition[1077]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 4 04:56:00.233810 ignition[1077]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 04:56:00.233810 ignition[1077]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 04:56:00.233810 ignition[1077]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 4 04:56:00.233810 ignition[1077]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 4 04:56:00.233810 ignition[1077]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 4 04:56:00.233810 ignition[1077]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 4 04:56:00.233810 ignition[1077]: INFO 
: files: op(d): [finished] processing unit "coreos-metadata.service" Nov 4 04:56:00.233810 ignition[1077]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 4 04:56:00.268737 ignition[1077]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 04:56:00.276834 ignition[1077]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 04:56:00.279776 ignition[1077]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 4 04:56:00.279776 ignition[1077]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 4 04:56:00.279776 ignition[1077]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 4 04:56:00.279776 ignition[1077]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 4 04:56:00.279776 ignition[1077]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 4 04:56:00.279776 ignition[1077]: INFO : files: files passed Nov 4 04:56:00.279776 ignition[1077]: INFO : Ignition finished successfully Nov 4 04:56:00.297693 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 4 04:56:00.303086 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 4 04:56:00.307384 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 4 04:56:00.328242 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 4 04:56:00.328400 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Nov 4 04:56:00.335070 initrd-setup-root-after-ignition[1108]: grep: /sysroot/oem/oem-release: No such file or directory Nov 4 04:56:00.339675 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 04:56:00.342849 initrd-setup-root-after-ignition[1110]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 4 04:56:00.345570 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 04:56:00.349849 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 04:56:00.350695 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 4 04:56:00.355300 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 4 04:56:00.425988 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 4 04:56:00.426231 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 4 04:56:00.427709 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 4 04:56:00.432337 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 4 04:56:00.438414 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 4 04:56:00.441991 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 4 04:56:00.478324 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 04:56:00.485353 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 4 04:56:00.517671 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 04:56:00.518295 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 4 04:56:00.519006 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 4 04:56:00.524566 systemd[1]: Stopped target timers.target - Timer Units. Nov 4 04:56:00.530540 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 4 04:56:00.530804 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 04:56:00.535933 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 4 04:56:00.536840 systemd[1]: Stopped target basic.target - Basic System. Nov 4 04:56:00.540033 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 4 04:56:00.546103 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 04:56:00.547108 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 4 04:56:00.554367 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 4 04:56:00.555661 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 4 04:56:00.559001 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 04:56:00.565577 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 4 04:56:00.566328 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 4 04:56:00.569944 systemd[1]: Stopped target swap.target - Swaps. Nov 4 04:56:00.573428 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 4 04:56:00.573580 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 4 04:56:00.579199 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 4 04:56:00.580334 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 04:56:00.584641 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 4 04:56:00.584919 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 04:56:00.588524 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Nov 4 04:56:00.588653 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 4 04:56:00.596259 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 4 04:56:00.596475 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 04:56:00.600309 systemd[1]: Stopped target paths.target - Path Units. Nov 4 04:56:00.601166 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 4 04:56:00.609016 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 04:56:00.610421 systemd[1]: Stopped target slices.target - Slice Units. Nov 4 04:56:00.619002 systemd[1]: Stopped target sockets.target - Socket Units. Nov 4 04:56:00.619811 systemd[1]: iscsid.socket: Deactivated successfully. Nov 4 04:56:00.619926 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 04:56:00.623424 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 4 04:56:00.623510 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 04:56:00.626641 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 4 04:56:00.626758 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 04:56:00.629678 systemd[1]: ignition-files.service: Deactivated successfully. Nov 4 04:56:00.629787 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 4 04:56:00.634737 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 4 04:56:00.636703 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 4 04:56:00.636852 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 04:56:00.657610 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 4 04:56:00.661498 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Nov 4 04:56:00.663394 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 04:56:00.668002 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 4 04:56:00.668256 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 04:56:00.669557 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 4 04:56:00.669740 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 04:56:00.685381 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 4 04:56:00.685539 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 4 04:56:00.691604 ignition[1134]: INFO : Ignition 2.22.0 Nov 4 04:56:00.691604 ignition[1134]: INFO : Stage: umount Nov 4 04:56:00.694248 ignition[1134]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 04:56:00.694248 ignition[1134]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 04:56:00.694248 ignition[1134]: INFO : umount: umount passed Nov 4 04:56:00.694248 ignition[1134]: INFO : Ignition finished successfully Nov 4 04:56:00.696343 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 4 04:56:00.696520 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 4 04:56:00.700863 systemd[1]: Stopped target network.target - Network. Nov 4 04:56:00.701895 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 4 04:56:00.701985 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 4 04:56:00.706394 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 4 04:56:00.706481 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 4 04:56:00.709261 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 4 04:56:00.709325 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 4 04:56:00.712487 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Nov 4 04:56:00.712538 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 4 04:56:00.715547 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 4 04:56:00.718728 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 4 04:56:00.724284 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 4 04:56:00.732351 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 4 04:56:00.732555 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 4 04:56:00.740821 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 4 04:56:00.741250 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 4 04:56:00.749798 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 4 04:56:00.750596 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 4 04:56:00.750663 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 4 04:56:00.754979 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 4 04:56:00.758368 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 4 04:56:00.758461 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 04:56:00.759557 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 04:56:00.759605 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 04:56:00.765594 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 4 04:56:00.765657 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 4 04:56:00.766485 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 04:56:00.788411 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 4 04:56:00.788694 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 4 04:56:00.792949 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 4 04:56:00.793019 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 4 04:56:00.797388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 4 04:56:00.797456 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 04:56:00.801666 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 4 04:56:00.801738 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 4 04:56:00.806083 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 4 04:56:00.806172 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 4 04:56:00.810133 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 4 04:56:00.810255 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 04:56:00.819166 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 4 04:56:00.820752 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 4 04:56:00.820854 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 04:56:00.826244 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 4 04:56:00.826347 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 04:56:00.827429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 04:56:00.827489 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:56:00.834269 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 4 04:56:00.846109 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 4 04:56:00.848329 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 4 04:56:00.848450 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Nov 4 04:56:00.858524 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 4 04:56:00.858675 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 4 04:56:00.868028 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 4 04:56:00.868205 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 4 04:56:00.869639 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 4 04:56:00.874924 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 4 04:56:00.902890 systemd[1]: Switching root. Nov 4 04:56:00.941957 systemd-journald[316]: Journal stopped Nov 4 04:56:03.015424 systemd-journald[316]: Received SIGTERM from PID 1 (systemd). Nov 4 04:56:03.015513 kernel: SELinux: policy capability network_peer_controls=1 Nov 4 04:56:03.015533 kernel: SELinux: policy capability open_perms=1 Nov 4 04:56:03.015549 kernel: SELinux: policy capability extended_socket_class=1 Nov 4 04:56:03.015564 kernel: SELinux: policy capability always_check_network=0 Nov 4 04:56:03.015580 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 4 04:56:03.015597 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 4 04:56:03.015655 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 4 04:56:03.015677 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 4 04:56:03.015694 kernel: SELinux: policy capability userspace_initial_context=0 Nov 4 04:56:03.015715 kernel: audit: type=1403 audit(1762232161.812:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 4 04:56:03.015733 systemd[1]: Successfully loaded SELinux policy in 75.424ms. Nov 4 04:56:03.015762 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.939ms. 
Nov 4 04:56:03.015781 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 04:56:03.015817 systemd[1]: Detected virtualization kvm. Nov 4 04:56:03.015836 systemd[1]: Detected architecture x86-64. Nov 4 04:56:03.015853 systemd[1]: Detected first boot. Nov 4 04:56:03.015887 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 04:56:03.015905 zram_generator::config[1182]: No configuration found. Nov 4 04:56:03.015924 kernel: Guest personality initialized and is inactive Nov 4 04:56:03.015949 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 4 04:56:03.016024 kernel: Initialized host personality Nov 4 04:56:03.016051 kernel: NET: Registered PF_VSOCK protocol family Nov 4 04:56:03.016077 systemd[1]: Populated /etc with preset unit settings. Nov 4 04:56:03.016095 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 4 04:56:03.016121 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 4 04:56:03.016139 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 4 04:56:03.016161 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 4 04:56:03.016190 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 4 04:56:03.016207 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 4 04:56:03.016224 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 4 04:56:03.016240 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 4 04:56:03.016258 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Nov 4 04:56:03.016276 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 4 04:56:03.016304 systemd[1]: Created slice user.slice - User and Session Slice. Nov 4 04:56:03.016322 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 04:56:03.016340 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 04:56:03.016358 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 4 04:56:03.016375 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 4 04:56:03.016394 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 4 04:56:03.016413 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 04:56:03.016440 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 4 04:56:03.016457 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 04:56:03.016475 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 04:56:03.016495 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 4 04:56:03.016512 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 4 04:56:03.016529 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 4 04:56:03.016547 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 4 04:56:03.016574 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 04:56:03.016591 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 04:56:03.016609 systemd[1]: Reached target slices.target - Slice Units. Nov 4 04:56:03.016627 systemd[1]: Reached target swap.target - Swaps. 
Nov 4 04:56:03.016644 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 4 04:56:03.016661 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 4 04:56:03.016678 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 4 04:56:03.016705 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 04:56:03.016723 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 04:56:03.016747 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 04:56:03.016768 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 4 04:56:03.016785 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 04:56:03.016802 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 04:56:03.016820 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 04:56:03.016847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:56:03.016866 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 4 04:56:03.016898 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 04:56:03.016918 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 04:56:03.016936 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 4 04:56:03.016953 systemd[1]: Reached target machines.target - Containers. Nov 4 04:56:03.016971 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 4 04:56:03.016999 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Nov 4 04:56:03.017017 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 04:56:03.017034 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 4 04:56:03.017056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 04:56:03.017074 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 04:56:03.017092 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 04:56:03.017137 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 04:56:03.017156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 04:56:03.017174 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 04:56:03.017191 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 04:56:03.017209 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 04:56:03.017226 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 4 04:56:03.017243 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 04:56:03.017271 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:56:03.017289 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 04:56:03.017306 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 04:56:03.017323 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 04:56:03.017342 kernel: fuse: init (API version 7.41) Nov 4 04:56:03.017359 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Nov 4 04:56:03.017377 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 4 04:56:03.017404 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 04:56:03.017450 systemd-journald[1246]: Collecting audit messages is disabled. Nov 4 04:56:03.017483 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:56:03.017518 systemd-journald[1246]: Journal started Nov 4 04:56:03.017555 systemd-journald[1246]: Runtime Journal (/run/log/journal/a74e0912ecee4ac1b6da7f6d7d1c4743) is 5.9M, max 47.8M, 41.8M free. Nov 4 04:56:03.023754 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 4 04:56:03.023808 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 04:56:02.491316 systemd[1]: Queued start job for default target multi-user.target. Nov 4 04:56:02.505024 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 4 04:56:02.505818 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 04:56:03.030407 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 04:56:03.030538 kernel: ACPI: bus type drm_connector registered Nov 4 04:56:03.035445 systemd[1]: Mounted media.mount - External Media Directory. Nov 4 04:56:03.056440 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 04:56:03.058628 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 04:56:03.060828 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 04:56:03.063169 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 04:56:03.065965 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 4 04:56:03.066257 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Nov 4 04:56:03.072864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 04:56:03.073289 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 04:56:03.076398 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 04:56:03.076716 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 04:56:03.079342 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 04:56:03.079652 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 04:56:03.082387 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 04:56:03.082699 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 04:56:03.085261 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 04:56:03.085665 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 04:56:03.088386 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 04:56:03.091403 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 04:56:03.108222 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 04:56:03.111515 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 04:56:03.131421 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 04:56:03.134604 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 04:56:03.138774 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 04:56:03.142556 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 4 04:56:03.145970 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Nov 4 04:56:03.146099 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 04:56:03.149560 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 4 04:56:03.153354 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:56:03.156026 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 04:56:03.160131 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 4 04:56:03.162534 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 04:56:03.164435 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 4 04:56:03.166612 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 04:56:03.171098 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 04:56:03.177061 systemd-journald[1246]: Time spent on flushing to /var/log/journal/a74e0912ecee4ac1b6da7f6d7d1c4743 is 18.941ms for 1024 entries. Nov 4 04:56:03.177061 systemd-journald[1246]: System Journal (/var/log/journal/a74e0912ecee4ac1b6da7f6d7d1c4743) is 8M, max 163.5M, 155.5M free. Nov 4 04:56:03.390007 systemd-journald[1246]: Received client request to flush runtime journal. Nov 4 04:56:03.390095 kernel: loop1: detected capacity change from 0 to 119080 Nov 4 04:56:03.390144 kernel: loop2: detected capacity change from 0 to 224512 Nov 4 04:56:03.390167 kernel: loop3: detected capacity change from 0 to 111544 Nov 4 04:56:03.179003 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 4 04:56:03.182178 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Nov 4 04:56:03.185536 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 04:56:03.188033 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 4 04:56:03.218687 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 04:56:03.339862 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 04:56:03.345972 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 4 04:56:03.356207 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 04:56:03.359866 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 04:56:03.370576 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 04:56:03.393135 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 04:56:03.403924 kernel: loop4: detected capacity change from 0 to 119080 Nov 4 04:56:03.416904 kernel: loop5: detected capacity change from 0 to 224512 Nov 4 04:56:03.432194 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 04:56:03.437771 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 04:56:03.442165 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 04:56:03.451065 kernel: loop6: detected capacity change from 0 to 111544 Nov 4 04:56:03.491736 (sd-merge)[1317]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 4 04:56:03.500318 (sd-merge)[1317]: Merged extensions into '/usr'. Nov 4 04:56:03.625510 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 04:56:03.629363 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 04:56:03.646529 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Nov 4 04:56:03.646558 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. 
Nov 4 04:56:03.647679 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 04:56:03.652388 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 04:56:03.659253 systemd[1]: Reload requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 04:56:03.659499 systemd[1]: Reloading... Nov 4 04:56:03.745921 zram_generator::config[1351]: No configuration found. Nov 4 04:56:03.855317 systemd-resolved[1320]: Positive Trust Anchors: Nov 4 04:56:03.855335 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 04:56:03.855340 systemd-resolved[1320]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 04:56:03.855373 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 04:56:03.860008 systemd-resolved[1320]: Defaulting to hostname 'linux'. Nov 4 04:56:03.979249 systemd[1]: Reloading finished in 319 ms. Nov 4 04:56:04.012001 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 4 04:56:04.014273 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 04:56:04.016524 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 04:56:04.021143 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 04:56:04.040891 systemd[1]: Starting ensure-sysext.service... 
Nov 4 04:56:04.060605 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 04:56:04.086050 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 04:56:04.086086 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 4 04:56:04.086369 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 04:56:04.086593 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 04:56:04.087707 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 04:56:04.088131 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Nov 4 04:56:04.088229 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Nov 4 04:56:04.089577 systemd[1]: Reload requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)... Nov 4 04:56:04.089602 systemd[1]: Reloading... Nov 4 04:56:04.100307 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 04:56:04.100323 systemd-tmpfiles[1395]: Skipping /boot Nov 4 04:56:04.116062 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 04:56:04.116347 systemd-tmpfiles[1395]: Skipping /boot Nov 4 04:56:04.174903 zram_generator::config[1425]: No configuration found. Nov 4 04:56:04.472210 systemd[1]: Reloading finished in 382 ms. Nov 4 04:56:04.496348 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 04:56:04.543133 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 04:56:04.546659 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Nov 4 04:56:04.551283 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 04:56:04.560497 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 04:56:04.567195 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 04:56:04.576131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 04:56:04.578253 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 04:56:04.586430 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 04:56:04.591297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 04:56:04.595214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:56:04.595382 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:56:04.596656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 04:56:04.596923 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 04:56:04.601866 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 04:56:04.602271 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 04:56:04.613836 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 04:56:04.624956 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 04:56:04.637388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 04:56:04.637697 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 4 04:56:04.641421 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:56:04.641831 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 04:56:04.646730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 04:56:04.653525 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 04:56:04.655499 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:56:04.656818 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:56:04.657999 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 04:56:04.658288 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:56:04.668991 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 4 04:56:04.674494 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 04:56:04.674791 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 04:56:04.677662 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 04:56:04.678002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 04:56:04.690760 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 04:56:04.696512 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 4 04:56:04.696796 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 04:56:04.698485 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 04:56:04.704055 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 04:56:04.708767 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 04:56:04.721416 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 04:56:04.727081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:56:04.727174 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:56:04.729788 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 04:56:04.733470 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:56:04.737866 systemd[1]: Finished ensure-sysext.service. Nov 4 04:56:04.741088 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 04:56:04.741529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 04:56:04.745039 augenrules[1506]: No rules Nov 4 04:56:04.745315 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 04:56:04.745740 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 04:56:04.749309 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 04:56:04.749762 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 04:56:04.752936 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Nov 4 04:56:04.756082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 04:56:04.756378 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 04:56:04.759277 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 04:56:04.759560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 04:56:04.772054 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 04:56:04.772178 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 04:56:04.772219 systemd-udevd[1502]: Using default interface naming scheme 'v257'. Nov 4 04:56:04.774784 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 4 04:56:04.777226 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 04:56:04.808137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 04:56:04.821178 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 04:56:04.922849 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 4 04:56:04.990471 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 04:56:04.995560 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Nov 4 04:56:04.999206 kernel: mousedev: PS/2 mouse device common for all mice Nov 4 04:56:05.032839 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 4 04:56:05.032953 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 4 04:56:05.043132 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 4 04:56:05.043410 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 4 04:56:05.053339 systemd-networkd[1531]: lo: Link UP Nov 4 04:56:05.053351 systemd-networkd[1531]: lo: Gained carrier Nov 4 04:56:05.061120 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 04:56:05.065850 systemd[1]: Reached target network.target - Network. Nov 4 04:56:05.090650 systemd-networkd[1531]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 04:56:05.090663 systemd-networkd[1531]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 04:56:05.096591 systemd-networkd[1531]: eth0: Link UP Nov 4 04:56:05.096929 systemd-networkd[1531]: eth0: Gained carrier Nov 4 04:56:05.096958 systemd-networkd[1531]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 04:56:05.099595 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 04:56:05.107261 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 04:56:05.113085 kernel: ACPI: button: Power Button [PWRF] Nov 4 04:56:05.115028 systemd-networkd[1531]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 4 04:56:05.117403 systemd-timesyncd[1518]: Network configuration changed, trying to establish connection. Nov 4 04:56:06.249181 systemd-resolved[1320]: Clock change detected. Flushing caches. 
Nov 4 04:56:06.252636 systemd-timesyncd[1518]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 4 04:56:06.254378 systemd-timesyncd[1518]: Initial clock synchronization to Tue 2025-11-04 04:56:06.249066 UTC. Nov 4 04:56:06.286305 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 04:56:06.575693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:56:06.625194 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 4 04:56:06.710986 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 04:56:06.718384 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 04:56:06.720154 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:56:06.730149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:56:06.744730 kernel: kvm_amd: TSC scaling supported Nov 4 04:56:06.744825 kernel: kvm_amd: Nested Virtualization enabled Nov 4 04:56:06.744876 kernel: kvm_amd: Nested Paging enabled Nov 4 04:56:06.744895 kernel: kvm_amd: LBR virtualization supported Nov 4 04:56:06.744912 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 4 04:56:06.744929 kernel: kvm_amd: Virtual GIF supported Nov 4 04:56:06.776908 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 04:56:06.782188 kernel: EDAC MC: Ver: 3.0.0 Nov 4 04:56:06.833926 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:56:06.876689 ldconfig[1465]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 04:56:06.883628 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 04:56:06.887391 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Nov 4 04:56:06.928617 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 04:56:06.930826 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 04:56:06.932654 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 04:56:06.934731 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 04:56:06.936861 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 4 04:56:06.939208 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 04:56:06.941296 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 04:56:06.943586 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 04:56:06.945898 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 04:56:06.945952 systemd[1]: Reached target paths.target - Path Units. Nov 4 04:56:06.947622 systemd[1]: Reached target timers.target - Timer Units. Nov 4 04:56:06.950426 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 04:56:06.954494 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 04:56:06.961633 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 04:56:06.964219 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 04:56:06.966553 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 04:56:06.971649 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 04:56:06.973882 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 04:56:06.976809 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Nov 4 04:56:06.979885 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 04:56:06.981755 systemd[1]: Reached target basic.target - Basic System. Nov 4 04:56:06.983533 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 4 04:56:06.983576 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 04:56:06.985215 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 04:56:06.988775 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 04:56:06.991979 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 04:56:06.996187 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 04:56:07.000480 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 04:56:07.002526 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 04:56:07.004350 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 4 04:56:07.008351 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 04:56:07.014426 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 04:56:07.017126 jq[1596]: false Nov 4 04:56:07.019397 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 04:56:07.024446 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 04:56:07.030779 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Refreshing passwd entry cache Nov 4 04:56:07.029754 oslogin_cache_refresh[1598]: Refreshing passwd entry cache Nov 4 04:56:07.037420 extend-filesystems[1597]: Found /dev/vda6 Nov 4 04:56:07.035338 systemd[1]: Starting systemd-logind.service - User Login Management... 
Nov 4 04:56:07.037422 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 04:56:07.038189 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 04:56:07.039467 systemd[1]: Starting update-engine.service - Update Engine... Nov 4 04:56:07.041690 extend-filesystems[1597]: Found /dev/vda9 Nov 4 04:56:07.043990 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Failure getting users, quitting Nov 4 04:56:07.043981 oslogin_cache_refresh[1598]: Failure getting users, quitting Nov 4 04:56:07.044085 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 04:56:07.044018 oslogin_cache_refresh[1598]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 04:56:07.044168 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Refreshing group entry cache Nov 4 04:56:07.044097 oslogin_cache_refresh[1598]: Refreshing group entry cache Nov 4 04:56:07.046463 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 04:56:07.054529 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Failure getting groups, quitting Nov 4 04:56:07.054529 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 04:56:07.054511 oslogin_cache_refresh[1598]: Failure getting groups, quitting Nov 4 04:56:07.054531 oslogin_cache_refresh[1598]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 04:56:07.055796 extend-filesystems[1597]: Checking size of /dev/vda9 Nov 4 04:56:07.058925 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Nov 4 04:56:07.062718 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 04:56:07.063067 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 04:56:07.063539 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 4 04:56:07.063906 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 4 04:56:07.126339 jq[1615]: true Nov 4 04:56:07.126541 update_engine[1611]: I20251104 04:56:07.104068 1611 main.cc:92] Flatcar Update Engine starting Nov 4 04:56:07.066497 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 04:56:07.066916 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 4 04:56:07.126931 tar[1621]: linux-amd64/LICENSE Nov 4 04:56:07.126931 tar[1621]: linux-amd64/helm Nov 4 04:56:07.073173 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 04:56:07.073587 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 04:56:07.138561 jq[1622]: true Nov 4 04:56:07.145405 extend-filesystems[1597]: Resized partition /dev/vda9 Nov 4 04:56:07.149861 extend-filesystems[1645]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 04:56:07.158136 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 4 04:56:07.181438 dbus-daemon[1594]: [system] SELinux support is enabled Nov 4 04:56:07.181744 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 4 04:56:07.187193 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 04:56:07.187265 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Nov 4 04:56:07.213272 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 04:56:07.222521 update_engine[1611]: I20251104 04:56:07.214890 1611 update_check_scheduler.cc:74] Next update check in 2m43s Nov 4 04:56:07.213322 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 04:56:07.219126 systemd[1]: Started update-engine.service - Update Engine. Nov 4 04:56:07.226751 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 04:56:07.234401 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 4 04:56:07.270890 extend-filesystems[1645]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 4 04:56:07.270890 extend-filesystems[1645]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 4 04:56:07.270890 extend-filesystems[1645]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 4 04:56:07.277370 extend-filesystems[1597]: Resized filesystem in /dev/vda9 Nov 4 04:56:07.287644 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 04:56:07.288391 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 4 04:56:07.326047 systemd-logind[1608]: Watching system buttons on /dev/input/event2 (Power Button) Nov 4 04:56:07.327265 systemd-logind[1608]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 4 04:56:07.330341 systemd-logind[1608]: New seat seat0. Nov 4 04:56:07.333519 systemd[1]: Started systemd-logind.service - User Login Management. 
Nov 4 04:56:07.370551 locksmithd[1647]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 04:56:07.420688 bash[1664]: Updated "/home/core/.ssh/authorized_keys" Nov 4 04:56:07.422277 systemd-networkd[1531]: eth0: Gained IPv6LL Nov 4 04:56:07.423489 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 04:56:07.426687 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 4 04:56:07.430740 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 04:56:07.433318 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 04:56:07.436858 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 4 04:56:07.441116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:56:07.450037 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 04:56:07.503506 sshd_keygen[1642]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 04:56:07.545610 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 04:56:07.551061 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 4 04:56:07.551979 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 4 04:56:07.561152 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 04:56:07.563573 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 04:56:07.607846 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 04:56:07.629721 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 04:56:07.630018 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 04:56:07.634747 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Nov 4 04:56:07.697047 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 04:56:07.706690 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 04:56:07.712685 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 04:56:07.715423 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 04:56:07.880492 containerd[1635]: time="2025-11-04T04:56:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 04:56:07.881857 containerd[1635]: time="2025-11-04T04:56:07.881629047Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4 Nov 4 04:56:07.910330 containerd[1635]: time="2025-11-04T04:56:07.904340450Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.578µs" Nov 4 04:56:07.910330 containerd[1635]: time="2025-11-04T04:56:07.904416253Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 04:56:07.910330 containerd[1635]: time="2025-11-04T04:56:07.904497956Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 04:56:07.910330 containerd[1635]: time="2025-11-04T04:56:07.904518114Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 04:56:07.910330 containerd[1635]: time="2025-11-04T04:56:07.904979168Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 04:56:07.910330 containerd[1635]: time="2025-11-04T04:56:07.905005117Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 04:56:07.910330 containerd[1635]: time="2025-11-04T04:56:07.905117548Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 04:56:07.910330 containerd[1635]: time="2025-11-04T04:56:07.905140110Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 04:56:07.910330 containerd[1635]: time="2025-11-04T04:56:07.905512839Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 04:56:07.910330 containerd[1635]: time="2025-11-04T04:56:07.905534920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 04:56:07.910330 containerd[1635]: time="2025-11-04T04:56:07.905553024Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 04:56:07.910330 containerd[1635]: time="2025-11-04T04:56:07.905565267Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 4 04:56:07.910803 containerd[1635]: time="2025-11-04T04:56:07.905795529Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 4 04:56:07.910803 containerd[1635]: time="2025-11-04T04:56:07.905817700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 04:56:07.910803 containerd[1635]: time="2025-11-04T04:56:07.905949207Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 04:56:07.910803 containerd[1635]: time="2025-11-04T04:56:07.906606339Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 04:56:07.910803 containerd[1635]: time="2025-11-04T04:56:07.906651594Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 04:56:07.910803 containerd[1635]: time="2025-11-04T04:56:07.906664759Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 04:56:07.910803 containerd[1635]: time="2025-11-04T04:56:07.906706928Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 04:56:07.910803 containerd[1635]: time="2025-11-04T04:56:07.907046645Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 04:56:07.910803 containerd[1635]: time="2025-11-04T04:56:07.907139759Z" level=info msg="metadata content store policy set" policy=shared Nov 4 04:56:08.131902 tar[1621]: linux-amd64/README.md Nov 4 04:56:08.161974 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 4 04:56:08.201749 containerd[1635]: time="2025-11-04T04:56:08.201664613Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 4 04:56:08.202088 containerd[1635]: time="2025-11-04T04:56:08.201841866Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 4 04:56:08.202194 containerd[1635]: time="2025-11-04T04:56:08.202063651Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 4 04:56:08.202194 containerd[1635]: time="2025-11-04T04:56:08.202130667Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 4 04:56:08.202194 containerd[1635]: time="2025-11-04T04:56:08.202161435Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 4 04:56:08.202249 containerd[1635]: time="2025-11-04T04:56:08.202198484Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 4 04:56:08.202249 containerd[1635]: time="2025-11-04T04:56:08.202218211Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 4 04:56:08.202249 containerd[1635]: time="2025-11-04T04:56:08.202229753Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 4 04:56:08.202249 containerd[1635]: time="2025-11-04T04:56:08.202245142Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 4 04:56:08.202420 containerd[1635]: time="2025-11-04T04:56:08.202263145Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 4 04:56:08.202420 containerd[1635]: time="2025-11-04T04:56:08.202276561Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 4 04:56:08.202517 containerd[1635]: time="2025-11-04T04:56:08.202421021Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 4 04:56:08.202616 containerd[1635]: time="2025-11-04T04:56:08.202563889Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 4 04:56:08.202714 containerd[1635]: time="2025-11-04T04:56:08.202668896Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 4 04:56:08.203682 containerd[1635]: time="2025-11-04T04:56:08.203614419Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 4 04:56:08.203731 containerd[1635]: time="2025-11-04T04:56:08.203697835Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 4 04:56:08.203781 containerd[1635]: time="2025-11-04T04:56:08.203743611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 4 04:56:08.203869 containerd[1635]: time="2025-11-04T04:56:08.203799837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 4 04:56:08.203869 containerd[1635]: time="2025-11-04T04:56:08.203835133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 4 04:56:08.203971 containerd[1635]: time="2025-11-04T04:56:08.203875348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 4 04:56:08.203971 containerd[1635]: time="2025-11-04T04:56:08.203919461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 4 04:56:08.204051 containerd[1635]: time="2025-11-04T04:56:08.204010602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 4 04:56:08.204131 containerd[1635]: time="2025-11-04T04:56:08.204039045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 4 04:56:08.204209 containerd[1635]: time="2025-11-04T04:56:08.204162186Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 4 04:56:08.204289 containerd[1635]: time="2025-11-04T04:56:08.204213613Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 4 04:56:08.204630 containerd[1635]: time="2025-11-04T04:56:08.204587173Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 4 04:56:08.205186 containerd[1635]: time="2025-11-04T04:56:08.205138166Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 4 04:56:08.205228 containerd[1635]: time="2025-11-04T04:56:08.205192157Z" level=info msg="Start snapshots syncer"
Nov 4 04:56:08.205248 containerd[1635]: time="2025-11-04T04:56:08.205235629Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 04:56:08.206141 containerd[1635]: time="2025-11-04T04:56:08.205976869Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 4 04:56:08.206416 containerd[1635]: time="2025-11-04T04:56:08.206263105Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 4 04:56:08.206551 containerd[1635]: time="2025-11-04T04:56:08.206502053Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 4 04:56:08.206858 containerd[1635]: time="2025-11-04T04:56:08.206779804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 4 04:56:08.206951 containerd[1635]: time="2025-11-04T04:56:08.206858702Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 4 04:56:08.206951 containerd[1635]: time="2025-11-04T04:56:08.206936748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 4 04:56:08.207071 containerd[1635]: time="2025-11-04T04:56:08.206978336Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 4 04:56:08.207071 containerd[1635]: time="2025-11-04T04:56:08.207055291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 4 04:56:08.207145 containerd[1635]: time="2025-11-04T04:56:08.207091909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 4 04:56:08.207167 containerd[1635]: time="2025-11-04T04:56:08.207150159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 4 04:56:08.207201 containerd[1635]: time="2025-11-04T04:56:08.207168182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 4 04:56:08.207239 containerd[1635]: time="2025-11-04T04:56:08.207199661Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 4 04:56:08.207358 containerd[1635]: time="2025-11-04T04:56:08.207309267Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 04:56:08.207402 containerd[1635]: time="2025-11-04T04:56:08.207355804Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 04:56:08.207402 containerd[1635]: time="2025-11-04T04:56:08.207373658Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 4 04:56:08.207494 containerd[1635]: time="2025-11-04T04:56:08.207397763Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 4 04:56:08.207494 containerd[1635]: time="2025-11-04T04:56:08.207423381Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 4 04:56:08.207494 containerd[1635]: time="2025-11-04T04:56:08.207457906Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 4 04:56:08.207738 containerd[1635]: time="2025-11-04T04:56:08.207499944Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 4 04:56:08.207738 containerd[1635]: time="2025-11-04T04:56:08.207619549Z" level=info msg="runtime interface created"
Nov 4 04:56:08.207738 containerd[1635]: time="2025-11-04T04:56:08.207637593Z" level=info msg="created NRI interface"
Nov 4 04:56:08.207738 containerd[1635]: time="2025-11-04T04:56:08.207654935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 4 04:56:08.207920 containerd[1635]: time="2025-11-04T04:56:08.207702404Z" level=info msg="Connect containerd service"
Nov 4 04:56:08.207920 containerd[1635]: time="2025-11-04T04:56:08.207838149Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 4 04:56:08.210198 containerd[1635]: time="2025-11-04T04:56:08.210093638Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 4 04:56:08.511416 containerd[1635]: time="2025-11-04T04:56:08.511310800Z" level=info msg="Start subscribing containerd event"
Nov 4 04:56:08.511568 containerd[1635]: time="2025-11-04T04:56:08.511447566Z" level=info msg="Start recovering state"
Nov 4 04:56:08.511712 containerd[1635]: time="2025-11-04T04:56:08.511692566Z" level=info msg="Start event monitor"
Nov 4 04:56:08.511740 containerd[1635]: time="2025-11-04T04:56:08.511724856Z" level=info msg="Start cni network conf syncer for default"
Nov 4 04:56:08.511740 containerd[1635]: time="2025-11-04T04:56:08.511737319Z" level=info msg="Start streaming server"
Nov 4 04:56:08.511786 containerd[1635]: time="2025-11-04T04:56:08.511753550Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 4 04:56:08.511786 containerd[1635]: time="2025-11-04T04:56:08.511763048Z" level=info msg="runtime interface starting up..."
Nov 4 04:56:08.511786 containerd[1635]: time="2025-11-04T04:56:08.511773527Z" level=info msg="starting plugins..."
Nov 4 04:56:08.511879 containerd[1635]: time="2025-11-04T04:56:08.511793405Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 4 04:56:08.513340 containerd[1635]: time="2025-11-04T04:56:08.513272207Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 4 04:56:08.513534 containerd[1635]: time="2025-11-04T04:56:08.513437056Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 4 04:56:08.513566 containerd[1635]: time="2025-11-04T04:56:08.513554386Z" level=info msg="containerd successfully booted in 0.634978s"
Nov 4 04:56:08.513825 systemd[1]: Started containerd.service - containerd container runtime.
Nov 4 04:56:09.638649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:56:09.641334 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 4 04:56:09.643411 systemd[1]: Startup finished in 3.987s (kernel) + 8.491s (initrd) + 6.775s (userspace) = 19.254s.
Nov 4 04:56:09.664456 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 04:56:10.466291 kubelet[1734]: E1104 04:56:10.466188 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 04:56:10.470502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 04:56:10.470687 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 04:56:10.471152 systemd[1]: kubelet.service: Consumed 2.546s CPU time, 265.4M memory peak.
Nov 4 04:56:16.547486 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 4 04:56:16.550627 systemd[1]: Started sshd@0-10.0.0.39:22-10.0.0.1:57832.service - OpenSSH per-connection server daemon (10.0.0.1:57832).
Nov 4 04:56:16.642866 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 57832 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw
Nov 4 04:56:16.645602 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:56:16.653435 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 4 04:56:16.654802 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 4 04:56:16.661854 systemd-logind[1608]: New session 1 of user core.
Nov 4 04:56:16.678637 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 4 04:56:16.682181 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 4 04:56:16.701907 (systemd)[1752]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 4 04:56:16.705755 systemd-logind[1608]: New session c1 of user core.
Nov 4 04:56:16.878067 systemd[1752]: Queued start job for default target default.target.
Nov 4 04:56:16.896847 systemd[1752]: Created slice app.slice - User Application Slice.
Nov 4 04:56:16.896878 systemd[1752]: Reached target paths.target - Paths.
Nov 4 04:56:16.896920 systemd[1752]: Reached target timers.target - Timers.
Nov 4 04:56:16.898521 systemd[1752]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 4 04:56:16.911369 systemd[1752]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 4 04:56:16.911545 systemd[1752]: Reached target sockets.target - Sockets.
Nov 4 04:56:16.911600 systemd[1752]: Reached target basic.target - Basic System.
Nov 4 04:56:16.911668 systemd[1752]: Reached target default.target - Main User Target.
Nov 4 04:56:16.911712 systemd[1752]: Startup finished in 196ms.
Nov 4 04:56:16.911943 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 4 04:56:16.913783 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 4 04:56:16.928802 systemd[1]: Started sshd@1-10.0.0.39:22-10.0.0.1:57840.service - OpenSSH per-connection server daemon (10.0.0.1:57840).
Nov 4 04:56:17.002880 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 57840 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw
Nov 4 04:56:17.004787 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:56:17.009697 systemd-logind[1608]: New session 2 of user core.
Nov 4 04:56:17.020341 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 4 04:56:17.036179 sshd[1766]: Connection closed by 10.0.0.1 port 57840
Nov 4 04:56:17.036532 sshd-session[1763]: pam_unix(sshd:session): session closed for user core
Nov 4 04:56:17.052062 systemd[1]: sshd@1-10.0.0.39:22-10.0.0.1:57840.service: Deactivated successfully.
Nov 4 04:56:17.053864 systemd[1]: session-2.scope: Deactivated successfully.
Nov 4 04:56:17.054597 systemd-logind[1608]: Session 2 logged out. Waiting for processes to exit.
Nov 4 04:56:17.057451 systemd[1]: Started sshd@2-10.0.0.39:22-10.0.0.1:57848.service - OpenSSH per-connection server daemon (10.0.0.1:57848).
Nov 4 04:56:17.058214 systemd-logind[1608]: Removed session 2.
Nov 4 04:56:17.114791 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 57848 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw
Nov 4 04:56:17.116307 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:56:17.120828 systemd-logind[1608]: New session 3 of user core.
Nov 4 04:56:17.131244 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 4 04:56:17.140520 sshd[1776]: Connection closed by 10.0.0.1 port 57848
Nov 4 04:56:17.140885 sshd-session[1772]: pam_unix(sshd:session): session closed for user core
Nov 4 04:56:17.154970 systemd[1]: sshd@2-10.0.0.39:22-10.0.0.1:57848.service: Deactivated successfully.
Nov 4 04:56:17.157291 systemd[1]: session-3.scope: Deactivated successfully.
Nov 4 04:56:17.158247 systemd-logind[1608]: Session 3 logged out. Waiting for processes to exit.
Nov 4 04:56:17.161697 systemd[1]: Started sshd@3-10.0.0.39:22-10.0.0.1:57850.service - OpenSSH per-connection server daemon (10.0.0.1:57850).
Nov 4 04:56:17.162464 systemd-logind[1608]: Removed session 3.
Nov 4 04:56:17.225402 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 57850 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw
Nov 4 04:56:17.227087 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:56:17.233583 systemd-logind[1608]: New session 4 of user core.
Nov 4 04:56:17.243289 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 4 04:56:17.258469 sshd[1785]: Connection closed by 10.0.0.1 port 57850
Nov 4 04:56:17.258863 sshd-session[1782]: pam_unix(sshd:session): session closed for user core
Nov 4 04:56:17.272928 systemd[1]: sshd@3-10.0.0.39:22-10.0.0.1:57850.service: Deactivated successfully.
Nov 4 04:56:17.275345 systemd[1]: session-4.scope: Deactivated successfully.
Nov 4 04:56:17.276403 systemd-logind[1608]: Session 4 logged out. Waiting for processes to exit.
Nov 4 04:56:17.280234 systemd[1]: Started sshd@4-10.0.0.39:22-10.0.0.1:57866.service - OpenSSH per-connection server daemon (10.0.0.1:57866).
Nov 4 04:56:17.281265 systemd-logind[1608]: Removed session 4.
Nov 4 04:56:17.328936 sshd[1791]: Accepted publickey for core from 10.0.0.1 port 57866 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw
Nov 4 04:56:17.330440 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:56:17.334832 systemd-logind[1608]: New session 5 of user core.
Nov 4 04:56:17.348303 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 4 04:56:17.374610 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 4 04:56:17.375009 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 04:56:17.389745 sudo[1795]: pam_unix(sudo:session): session closed for user root
Nov 4 04:56:17.391878 sshd[1794]: Connection closed by 10.0.0.1 port 57866
Nov 4 04:56:17.392262 sshd-session[1791]: pam_unix(sshd:session): session closed for user core
Nov 4 04:56:17.406431 systemd[1]: sshd@4-10.0.0.39:22-10.0.0.1:57866.service: Deactivated successfully.
Nov 4 04:56:17.408974 systemd[1]: session-5.scope: Deactivated successfully.
Nov 4 04:56:17.409952 systemd-logind[1608]: Session 5 logged out. Waiting for processes to exit.
Nov 4 04:56:17.415109 systemd[1]: Started sshd@5-10.0.0.39:22-10.0.0.1:57876.service - OpenSSH per-connection server daemon (10.0.0.1:57876).
Nov 4 04:56:17.417052 systemd-logind[1608]: Removed session 5.
Nov 4 04:56:17.478986 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 57876 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw
Nov 4 04:56:17.482467 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:56:17.491488 systemd-logind[1608]: New session 6 of user core.
Nov 4 04:56:17.497609 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 4 04:56:17.520458 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 4 04:56:17.521001 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 04:56:17.532912 sudo[1806]: pam_unix(sudo:session): session closed for user root
Nov 4 04:56:17.544558 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 4 04:56:17.544991 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 04:56:17.559251 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 04:56:17.636546 augenrules[1828]: No rules
Nov 4 04:56:17.638493 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 04:56:17.638804 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 04:56:17.640327 sudo[1805]: pam_unix(sudo:session): session closed for user root
Nov 4 04:56:17.646794 sshd[1804]: Connection closed by 10.0.0.1 port 57876
Nov 4 04:56:17.646902 sshd-session[1801]: pam_unix(sshd:session): session closed for user core
Nov 4 04:56:17.663915 systemd[1]: sshd@5-10.0.0.39:22-10.0.0.1:57876.service: Deactivated successfully.
Nov 4 04:56:17.666335 systemd[1]: session-6.scope: Deactivated successfully.
Nov 4 04:56:17.667415 systemd-logind[1608]: Session 6 logged out. Waiting for processes to exit.
Nov 4 04:56:17.671037 systemd[1]: Started sshd@6-10.0.0.39:22-10.0.0.1:57890.service - OpenSSH per-connection server daemon (10.0.0.1:57890).
Nov 4 04:56:17.671823 systemd-logind[1608]: Removed session 6.
Nov 4 04:56:17.747706 sshd[1837]: Accepted publickey for core from 10.0.0.1 port 57890 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw
Nov 4 04:56:17.749340 sshd-session[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:56:17.754658 systemd-logind[1608]: New session 7 of user core.
Nov 4 04:56:17.768264 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 4 04:56:17.782338 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 4 04:56:17.782663 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 04:56:18.455723 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 4 04:56:18.473468 (dockerd)[1861]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 4 04:56:19.449569 dockerd[1861]: time="2025-11-04T04:56:19.449485975Z" level=info msg="Starting up"
Nov 4 04:56:19.450355 dockerd[1861]: time="2025-11-04T04:56:19.450329447Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 4 04:56:19.502803 dockerd[1861]: time="2025-11-04T04:56:19.502706624Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 4 04:56:19.945392 dockerd[1861]: time="2025-11-04T04:56:19.945308222Z" level=info msg="Loading containers: start."
Nov 4 04:56:20.062162 kernel: Initializing XFRM netlink socket
Nov 4 04:56:20.472472 systemd-networkd[1531]: docker0: Link UP
Nov 4 04:56:20.478895 dockerd[1861]: time="2025-11-04T04:56:20.478831408Z" level=info msg="Loading containers: done."
Nov 4 04:56:20.490311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 4 04:56:20.494025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 04:56:20.503878 dockerd[1861]: time="2025-11-04T04:56:20.503800054Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 4 04:56:20.504089 dockerd[1861]: time="2025-11-04T04:56:20.503988968Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 4 04:56:20.504153 dockerd[1861]: time="2025-11-04T04:56:20.504134180Z" level=info msg="Initializing buildkit"
Nov 4 04:56:20.552894 dockerd[1861]: time="2025-11-04T04:56:20.552818083Z" level=info msg="Completed buildkit initialization"
Nov 4 04:56:20.561338 dockerd[1861]: time="2025-11-04T04:56:20.561241416Z" level=info msg="Daemon has completed initialization"
Nov 4 04:56:20.561338 dockerd[1861]: time="2025-11-04T04:56:20.561312920Z" level=info msg="API listen on /run/docker.sock"
Nov 4 04:56:20.562120 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 4 04:56:20.950548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:56:20.969644 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 04:56:21.050069 kubelet[2088]: E1104 04:56:21.049944 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 04:56:21.086809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 04:56:21.087084 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 04:56:21.087721 systemd[1]: kubelet.service: Consumed 488ms CPU time, 110.7M memory peak.
Nov 4 04:56:21.831677 containerd[1635]: time="2025-11-04T04:56:21.831605539Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 4 04:56:23.508841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788109758.mount: Deactivated successfully.
Nov 4 04:56:25.337505 containerd[1635]: time="2025-11-04T04:56:25.336514763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:25.337973 containerd[1635]: time="2025-11-04T04:56:25.337843003Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28132985"
Nov 4 04:56:25.338914 containerd[1635]: time="2025-11-04T04:56:25.338838269Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:25.342094 containerd[1635]: time="2025-11-04T04:56:25.342031907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:25.344270 containerd[1635]: time="2025-11-04T04:56:25.344211514Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.512545192s"
Nov 4 04:56:25.344270 containerd[1635]: time="2025-11-04T04:56:25.344264002Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 4 04:56:25.345038 containerd[1635]: time="2025-11-04T04:56:25.344960478Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 4 04:56:28.836334 containerd[1635]: time="2025-11-04T04:56:28.836221822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:28.910377 containerd[1635]: time="2025-11-04T04:56:28.910284019Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24778872"
Nov 4 04:56:28.982468 containerd[1635]: time="2025-11-04T04:56:28.982387413Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:29.066907 containerd[1635]: time="2025-11-04T04:56:29.066838960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:29.067968 containerd[1635]: time="2025-11-04T04:56:29.067917201Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 3.722883285s"
Nov 4 04:56:29.067968 containerd[1635]: time="2025-11-04T04:56:29.067955723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 4 04:56:29.068652 containerd[1635]: time="2025-11-04T04:56:29.068632482Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 4 04:56:31.240343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 4 04:56:31.242083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 04:56:31.486913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:56:31.502367 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 04:56:31.917869 kubelet[2173]: E1104 04:56:31.917698 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 04:56:31.922200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 04:56:31.922424 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 04:56:31.922867 systemd[1]: kubelet.service: Consumed 308ms CPU time, 108.7M memory peak.
Nov 4 04:56:34.085248 containerd[1635]: time="2025-11-04T04:56:34.085197140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:56:34.086337 containerd[1635]: time="2025-11-04T04:56:34.086310307Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19169517" Nov 4 04:56:34.088119 containerd[1635]: time="2025-11-04T04:56:34.088079745Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:56:34.093469 containerd[1635]: time="2025-11-04T04:56:34.093434305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:56:34.094363 containerd[1635]: time="2025-11-04T04:56:34.094334532Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 5.025675751s" Nov 4 04:56:34.094363 containerd[1635]: time="2025-11-04T04:56:34.094367815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 4 04:56:34.094951 containerd[1635]: time="2025-11-04T04:56:34.094927164Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 4 04:56:36.681625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3880023970.mount: Deactivated successfully. 
Nov 4 04:56:37.426986 containerd[1635]: time="2025-11-04T04:56:37.426375378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:37.430313 containerd[1635]: time="2025-11-04T04:56:37.430243140Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30920484"
Nov 4 04:56:37.432300 containerd[1635]: time="2025-11-04T04:56:37.432250805Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:37.434682 containerd[1635]: time="2025-11-04T04:56:37.434544315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:37.435204 containerd[1635]: time="2025-11-04T04:56:37.435154379Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 3.340193352s"
Nov 4 04:56:37.435204 containerd[1635]: time="2025-11-04T04:56:37.435199183Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 4 04:56:37.435878 containerd[1635]: time="2025-11-04T04:56:37.435844272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 4 04:56:41.272958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623096402.mount: Deactivated successfully.
Nov 4 04:56:41.990483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 4 04:56:41.992461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 04:56:42.761301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:56:42.784463 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 04:56:42.937275 containerd[1635]: time="2025-11-04T04:56:42.937125248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:42.944837 containerd[1635]: time="2025-11-04T04:56:42.944761141Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18433800"
Nov 4 04:56:42.948726 containerd[1635]: time="2025-11-04T04:56:42.948631264Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:42.953146 kubelet[2254]: E1104 04:56:42.952474 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 04:56:42.953473 containerd[1635]: time="2025-11-04T04:56:42.952612412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:42.953608 containerd[1635]: time="2025-11-04T04:56:42.953560054Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 5.517681426s"
Nov 4 04:56:42.953676 containerd[1635]: time="2025-11-04T04:56:42.953605647Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 4 04:56:42.954387 containerd[1635]: time="2025-11-04T04:56:42.954355223Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 4 04:56:42.957087 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 04:56:42.957327 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 04:56:42.957756 systemd[1]: kubelet.service: Consumed 266ms CPU time, 110.9M memory peak.
Nov 4 04:56:43.655283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016547364.mount: Deactivated successfully.
Nov 4 04:56:44.005030 containerd[1635]: time="2025-11-04T04:56:44.004923693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 04:56:44.047907 containerd[1635]: time="2025-11-04T04:56:44.047813520Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Nov 4 04:56:44.114541 containerd[1635]: time="2025-11-04T04:56:44.114446586Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 04:56:44.183266 containerd[1635]: time="2025-11-04T04:56:44.183204036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 04:56:44.183969 containerd[1635]: time="2025-11-04T04:56:44.183912990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.229520787s"
Nov 4 04:56:44.183969 containerd[1635]: time="2025-11-04T04:56:44.183963033Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 4 04:56:44.184525 containerd[1635]: time="2025-11-04T04:56:44.184497391Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 4 04:56:45.799362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1047146204.mount: Deactivated successfully.
Nov 4 04:56:48.595773 containerd[1635]: time="2025-11-04T04:56:48.595688541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:48.597933 containerd[1635]: time="2025-11-04T04:56:48.597892784Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57570184"
Nov 4 04:56:48.599529 containerd[1635]: time="2025-11-04T04:56:48.599473679Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:48.604259 containerd[1635]: time="2025-11-04T04:56:48.604178175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:48.605303 containerd[1635]: time="2025-11-04T04:56:48.605266215Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.420729921s"
Nov 4 04:56:48.605303 containerd[1635]: time="2025-11-04T04:56:48.605301862Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 4 04:56:51.336814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:56:51.337006 systemd[1]: kubelet.service: Consumed 266ms CPU time, 110.9M memory peak.
Nov 4 04:56:51.339528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 04:56:51.367189 systemd[1]: Reload requested from client PID 2351 ('systemctl') (unit session-7.scope)...
Nov 4 04:56:51.367212 systemd[1]: Reloading...
Nov 4 04:56:51.469152 zram_generator::config[2397]: No configuration found.
Nov 4 04:56:51.798556 systemd[1]: Reloading finished in 430 ms.
Nov 4 04:56:51.886195 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 4 04:56:51.886319 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 4 04:56:51.886730 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:56:51.886790 systemd[1]: kubelet.service: Consumed 261ms CPU time, 98.4M memory peak.
Nov 4 04:56:51.888913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 04:56:52.088576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:56:52.094594 (kubelet)[2442]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 4 04:56:52.198033 kubelet[2442]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 04:56:52.198033 kubelet[2442]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 4 04:56:52.198033 kubelet[2442]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 04:56:52.198608 kubelet[2442]: I1104 04:56:52.198145 2442 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 4 04:56:52.421764 update_engine[1611]: I20251104 04:56:52.421545 1611 update_attempter.cc:509] Updating boot flags...
Nov 4 04:56:53.877960 kubelet[2442]: I1104 04:56:53.877883 2442 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 4 04:56:53.877960 kubelet[2442]: I1104 04:56:53.877939 2442 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 4 04:56:53.878546 kubelet[2442]: I1104 04:56:53.878383 2442 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 4 04:56:54.155833 kubelet[2442]: E1104 04:56:54.155649 2442 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
Nov 4 04:56:54.157219 kubelet[2442]: I1104 04:56:54.157168 2442 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 4 04:56:54.176503 kubelet[2442]: I1104 04:56:54.176445 2442 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 4 04:56:54.193485 kubelet[2442]: I1104 04:56:54.193434 2442 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 4 04:56:54.197873 kubelet[2442]: I1104 04:56:54.197786 2442 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 4 04:56:54.198128 kubelet[2442]: I1104 04:56:54.197869 2442 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 4 04:56:54.198543 kubelet[2442]: I1104 04:56:54.198506 2442 topology_manager.go:138] "Creating topology manager with none policy"
Nov 4 04:56:54.198811 kubelet[2442]: I1104 04:56:54.198541 2442 container_manager_linux.go:304] "Creating device plugin manager"
Nov 4 04:56:54.199209 kubelet[2442]: I1104 04:56:54.198866 2442 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 04:56:54.206012 kubelet[2442]: I1104 04:56:54.204795 2442 kubelet.go:446] "Attempting to sync node with API server"
Nov 4 04:56:54.206012 kubelet[2442]: I1104 04:56:54.204847 2442 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 4 04:56:54.206012 kubelet[2442]: I1104 04:56:54.204883 2442 kubelet.go:352] "Adding apiserver pod source"
Nov 4 04:56:54.206012 kubelet[2442]: I1104 04:56:54.204902 2442 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 4 04:56:54.225161 kubelet[2442]: W1104 04:56:54.223162 2442 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Nov 4 04:56:54.225161 kubelet[2442]: E1104 04:56:54.223238 2442 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
Nov 4 04:56:54.225161 kubelet[2442]: I1104 04:56:54.223332 2442 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1"
Nov 4 04:56:54.225161 kubelet[2442]: I1104 04:56:54.223736 2442 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 4 04:56:54.225161 kubelet[2442]: W1104 04:56:54.223953 2442 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Nov 4 04:56:54.225161 kubelet[2442]: E1104 04:56:54.223993 2442 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
Nov 4 04:56:54.225161 kubelet[2442]: W1104 04:56:54.225088 2442 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 4 04:56:54.228288 kubelet[2442]: I1104 04:56:54.228246 2442 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 4 04:56:54.228364 kubelet[2442]: I1104 04:56:54.228299 2442 server.go:1287] "Started kubelet"
Nov 4 04:56:54.229476 kubelet[2442]: I1104 04:56:54.228506 2442 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 4 04:56:54.230658 kubelet[2442]: I1104 04:56:54.230620 2442 server.go:479] "Adding debug handlers to kubelet server"
Nov 4 04:56:54.231843 kubelet[2442]: I1104 04:56:54.230816 2442 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 4 04:56:54.234504 kubelet[2442]: I1104 04:56:54.234044 2442 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 4 04:56:54.234504 kubelet[2442]: I1104 04:56:54.233558 2442 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 4 04:56:54.234504 kubelet[2442]: I1104 04:56:54.233437 2442 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 4 04:56:54.236396 kubelet[2442]: I1104 04:56:54.236185 2442 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 4 04:56:54.237882 kubelet[2442]: E1104 04:56:54.237851 2442 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 4 04:56:54.237882 kubelet[2442]: E1104 04:56:54.236988 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="200ms"
Nov 4 04:56:54.237882 kubelet[2442]: I1104 04:56:54.237294 2442 factory.go:221] Registration of the systemd container factory successfully
Nov 4 04:56:54.237988 kubelet[2442]: I1104 04:56:54.237940 2442 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 04:56:54.238267 kubelet[2442]: I1104 04:56:54.238251 2442 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 4 04:56:54.239088 kubelet[2442]: E1104 04:56:54.237498 2442 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874b4dcb4d36c51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 04:56:54.228266065 +0000 UTC m=+2.129067526,LastTimestamp:2025-11-04 04:56:54.228266065 +0000 UTC m=+2.129067526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 4 04:56:54.239228 kubelet[2442]: W1104 04:56:54.237729 2442 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Nov 4 04:56:54.239468 kubelet[2442]: E1104 04:56:54.236875 2442 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 04:56:54.239536 kubelet[2442]: I1104 04:56:54.239513 2442 reconciler.go:26] "Reconciler: start to sync state"
Nov 4 04:56:54.239986 kubelet[2442]: E1104 04:56:54.239652 2442 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
Nov 4 04:56:54.240912 kubelet[2442]: I1104 04:56:54.240851 2442 factory.go:221] Registration of the containerd container factory successfully
Nov 4 04:56:54.265134 kubelet[2442]: I1104 04:56:54.265013 2442 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 04:56:54.265134 kubelet[2442]: I1104 04:56:54.265034 2442 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 04:56:54.265134 kubelet[2442]: I1104 04:56:54.265065 2442 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 04:56:54.267391 kubelet[2442]: I1104 04:56:54.267306 2442 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 4 04:56:54.268953 kubelet[2442]: I1104 04:56:54.268920 2442 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 4 04:56:54.269016 kubelet[2442]: I1104 04:56:54.268969 2442 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 4 04:56:54.269016 kubelet[2442]: I1104 04:56:54.269007 2442 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 4 04:56:54.269071 kubelet[2442]: I1104 04:56:54.269019 2442 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 4 04:56:54.269195 kubelet[2442]: E1104 04:56:54.269161 2442 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 04:56:54.340610 kubelet[2442]: E1104 04:56:54.340540 2442 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 04:56:54.370233 kubelet[2442]: E1104 04:56:54.370169 2442 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 4 04:56:54.426906 kubelet[2442]: W1104 04:56:54.426703 2442 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Nov 4 04:56:54.426906 kubelet[2442]: E1104 04:56:54.426815 2442 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
Nov 4 04:56:54.431833 kubelet[2442]: I1104 04:56:54.431783 2442 policy_none.go:49] "None policy: Start"
Nov 4 04:56:54.431898 kubelet[2442]: I1104 04:56:54.431839 2442 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 4 04:56:54.431898 kubelet[2442]: I1104 04:56:54.431861 2442 state_mem.go:35] "Initializing new in-memory state store"
Nov 4 04:56:54.438457 kubelet[2442]: E1104 04:56:54.438415 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="400ms"
Nov 4 04:56:54.438880 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 4 04:56:54.440721 kubelet[2442]: E1104 04:56:54.440692 2442 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 04:56:54.453858 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 4 04:56:54.457957 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 4 04:56:54.473411 kubelet[2442]: I1104 04:56:54.473351 2442 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 4 04:56:54.473667 kubelet[2442]: I1104 04:56:54.473648 2442 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 4 04:56:54.473900 kubelet[2442]: I1104 04:56:54.473675 2442 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 4 04:56:54.474291 kubelet[2442]: I1104 04:56:54.474268 2442 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 4 04:56:54.475733 kubelet[2442]: E1104 04:56:54.475672 2442 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 4 04:56:54.475843 kubelet[2442]: E1104 04:56:54.475747 2442 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 4 04:56:54.575206 kubelet[2442]: I1104 04:56:54.575151 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 04:56:54.575565 kubelet[2442]: E1104 04:56:54.575536 2442 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost"
Nov 4 04:56:54.581209 systemd[1]: Created slice kubepods-burstable-podf7a920be43f6bce1bd0d937a8ac37523.slice - libcontainer container kubepods-burstable-podf7a920be43f6bce1bd0d937a8ac37523.slice.
Nov 4 04:56:54.606769 kubelet[2442]: E1104 04:56:54.606724 2442 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 04:56:54.609264 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice.
Nov 4 04:56:54.611654 kubelet[2442]: E1104 04:56:54.611611 2442 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 04:56:54.614654 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice.
Nov 4 04:56:54.616486 kubelet[2442]: E1104 04:56:54.616432 2442 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 04:56:54.641277 kubelet[2442]: I1104 04:56:54.641205 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7a920be43f6bce1bd0d937a8ac37523-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f7a920be43f6bce1bd0d937a8ac37523\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 04:56:54.641277 kubelet[2442]: I1104 04:56:54.641269 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 04:56:54.641277 kubelet[2442]: I1104 04:56:54.641305 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 04:56:54.641579 kubelet[2442]: I1104 04:56:54.641322 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Nov 4 04:56:54.641579 kubelet[2442]: I1104 04:56:54.641338 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7a920be43f6bce1bd0d937a8ac37523-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f7a920be43f6bce1bd0d937a8ac37523\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 04:56:54.641579 kubelet[2442]: I1104 04:56:54.641352 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7a920be43f6bce1bd0d937a8ac37523-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f7a920be43f6bce1bd0d937a8ac37523\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 04:56:54.641579 kubelet[2442]: I1104 04:56:54.641385 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 04:56:54.641579 kubelet[2442]: I1104 04:56:54.641407 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 04:56:54.641813 kubelet[2442]: I1104 04:56:54.641426 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 04:56:54.777952 kubelet[2442]: I1104 04:56:54.777909 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 04:56:54.778363 kubelet[2442]: E1104 04:56:54.778321 2442 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost"
Nov 4 04:56:54.839236 kubelet[2442]: E1104 04:56:54.839162 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="800ms"
Nov 4 04:56:54.907949 kubelet[2442]: E1104 04:56:54.907870 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:56:54.908815 containerd[1635]: time="2025-11-04T04:56:54.908762796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f7a920be43f6bce1bd0d937a8ac37523,Namespace:kube-system,Attempt:0,}"
Nov 4 04:56:54.913052 kubelet[2442]: E1104 04:56:54.913013 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:56:54.913582 containerd[1635]: time="2025-11-04T04:56:54.913534574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}"
Nov 4 04:56:54.917807 kubelet[2442]: E1104 04:56:54.917771 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 04:56:54.918150 containerd[1635]: time="2025-11-04T04:56:54.918085972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}"
Nov 4 04:56:55.180289 kubelet[2442]: I1104 04:56:55.180166 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 04:56:55.180629 kubelet[2442]: E1104 04:56:55.180587 2442 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost"
Nov 4 04:56:55.569136 containerd[1635]: time="2025-11-04T04:56:55.568703149Z" level=info msg="connecting to shim d977787d97718af7cf9d56a8a8361c0acf69f05cac034d628683b07fc1f0a43b" address="unix:///run/containerd/s/2e5bdd3dc825147b0830567a9cd330d44598e4f3e086615d1cdc7ab6dba3093e" namespace=k8s.io protocol=ttrpc version=3
Nov 4 04:56:55.569305 containerd[1635]: time="2025-11-04T04:56:55.569203902Z" level=info msg="connecting to shim 19b936545023a7dfec3007cf21cc090e30007904296bd8b8ed84cf977c8f97b9" address="unix:///run/containerd/s/74e258be415d52edee7f1fbbd6222b78dfffe74fa050a0ad638cba614262e890" namespace=k8s.io protocol=ttrpc version=3
Nov 4 04:56:55.573567 containerd[1635]: time="2025-11-04T04:56:55.573480440Z" level=info msg="connecting to shim 83083f47053dba082bf86dc300089232530ccfcd4bb5ea3ffcf495621a131faf" address="unix:///run/containerd/s/69260936f5ff211a604d81b8889fd195ae771268d1009b4bd893fdd26ac2d39a" namespace=k8s.io protocol=ttrpc version=3
Nov 4 04:56:55.611787 systemd[1]: Started cri-containerd-d977787d97718af7cf9d56a8a8361c0acf69f05cac034d628683b07fc1f0a43b.scope - libcontainer container d977787d97718af7cf9d56a8a8361c0acf69f05cac034d628683b07fc1f0a43b.
Nov 4 04:56:55.612848 kubelet[2442]: W1104 04:56:55.612386 2442 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Nov 4 04:56:55.612848 kubelet[2442]: E1104 04:56:55.612486 2442 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Nov 4 04:56:55.619066 systemd[1]: Started cri-containerd-19b936545023a7dfec3007cf21cc090e30007904296bd8b8ed84cf977c8f97b9.scope - libcontainer container 19b936545023a7dfec3007cf21cc090e30007904296bd8b8ed84cf977c8f97b9. Nov 4 04:56:55.630827 systemd[1]: Started cri-containerd-83083f47053dba082bf86dc300089232530ccfcd4bb5ea3ffcf495621a131faf.scope - libcontainer container 83083f47053dba082bf86dc300089232530ccfcd4bb5ea3ffcf495621a131faf. 
Nov 4 04:56:55.641824 kubelet[2442]: E1104 04:56:55.641318 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="1.6s" Nov 4 04:56:55.690295 containerd[1635]: time="2025-11-04T04:56:55.690134295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"d977787d97718af7cf9d56a8a8361c0acf69f05cac034d628683b07fc1f0a43b\"" Nov 4 04:56:55.694647 kubelet[2442]: E1104 04:56:55.694585 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:56:55.694830 containerd[1635]: time="2025-11-04T04:56:55.694737483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f7a920be43f6bce1bd0d937a8ac37523,Namespace:kube-system,Attempt:0,} returns sandbox id \"19b936545023a7dfec3007cf21cc090e30007904296bd8b8ed84cf977c8f97b9\"" Nov 4 04:56:55.695729 kubelet[2442]: E1104 04:56:55.695606 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:56:55.698701 containerd[1635]: time="2025-11-04T04:56:55.698634995Z" level=info msg="CreateContainer within sandbox \"19b936545023a7dfec3007cf21cc090e30007904296bd8b8ed84cf977c8f97b9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 04:56:55.699465 containerd[1635]: time="2025-11-04T04:56:55.699076417Z" level=info msg="CreateContainer within sandbox \"d977787d97718af7cf9d56a8a8361c0acf69f05cac034d628683b07fc1f0a43b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 04:56:55.707679 
kubelet[2442]: W1104 04:56:55.707569 2442 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Nov 4 04:56:55.707679 kubelet[2442]: E1104 04:56:55.707684 2442 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Nov 4 04:56:55.715988 containerd[1635]: time="2025-11-04T04:56:55.715921597Z" level=info msg="Container 03229023a392967de27f1e7db9f944127a98540e826821ffbf619707902cad07: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:56:55.720504 containerd[1635]: time="2025-11-04T04:56:55.720447050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"83083f47053dba082bf86dc300089232530ccfcd4bb5ea3ffcf495621a131faf\"" Nov 4 04:56:55.721421 kubelet[2442]: E1104 04:56:55.721387 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:56:55.723808 containerd[1635]: time="2025-11-04T04:56:55.723758551Z" level=info msg="CreateContainer within sandbox \"83083f47053dba082bf86dc300089232530ccfcd4bb5ea3ffcf495621a131faf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 04:56:55.728720 containerd[1635]: time="2025-11-04T04:56:55.728647571Z" level=info msg="Container e16791bf7a9d7458566433703d8700557da56feb44d0a2470d19d1e1227f95a4: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:56:55.732171 containerd[1635]: 
time="2025-11-04T04:56:55.732086168Z" level=info msg="CreateContainer within sandbox \"19b936545023a7dfec3007cf21cc090e30007904296bd8b8ed84cf977c8f97b9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"03229023a392967de27f1e7db9f944127a98540e826821ffbf619707902cad07\"" Nov 4 04:56:55.734258 containerd[1635]: time="2025-11-04T04:56:55.734214314Z" level=info msg="StartContainer for \"03229023a392967de27f1e7db9f944127a98540e826821ffbf619707902cad07\"" Nov 4 04:56:55.735635 containerd[1635]: time="2025-11-04T04:56:55.735587192Z" level=info msg="connecting to shim 03229023a392967de27f1e7db9f944127a98540e826821ffbf619707902cad07" address="unix:///run/containerd/s/74e258be415d52edee7f1fbbd6222b78dfffe74fa050a0ad638cba614262e890" protocol=ttrpc version=3 Nov 4 04:56:55.738413 containerd[1635]: time="2025-11-04T04:56:55.738384805Z" level=info msg="Container 994db91bef229284fafbfcb31b192fb8ee2d344f6cea2a9f294c45d4d3c5a88b: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:56:55.742040 containerd[1635]: time="2025-11-04T04:56:55.742002086Z" level=info msg="CreateContainer within sandbox \"d977787d97718af7cf9d56a8a8361c0acf69f05cac034d628683b07fc1f0a43b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e16791bf7a9d7458566433703d8700557da56feb44d0a2470d19d1e1227f95a4\"" Nov 4 04:56:55.742445 containerd[1635]: time="2025-11-04T04:56:55.742423892Z" level=info msg="StartContainer for \"e16791bf7a9d7458566433703d8700557da56feb44d0a2470d19d1e1227f95a4\"" Nov 4 04:56:55.743678 containerd[1635]: time="2025-11-04T04:56:55.743631903Z" level=info msg="connecting to shim e16791bf7a9d7458566433703d8700557da56feb44d0a2470d19d1e1227f95a4" address="unix:///run/containerd/s/2e5bdd3dc825147b0830567a9cd330d44598e4f3e086615d1cdc7ab6dba3093e" protocol=ttrpc version=3 Nov 4 04:56:55.747048 containerd[1635]: time="2025-11-04T04:56:55.747004147Z" level=info msg="CreateContainer within sandbox 
\"83083f47053dba082bf86dc300089232530ccfcd4bb5ea3ffcf495621a131faf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"994db91bef229284fafbfcb31b192fb8ee2d344f6cea2a9f294c45d4d3c5a88b\"" Nov 4 04:56:55.747849 containerd[1635]: time="2025-11-04T04:56:55.747825106Z" level=info msg="StartContainer for \"994db91bef229284fafbfcb31b192fb8ee2d344f6cea2a9f294c45d4d3c5a88b\"" Nov 4 04:56:55.750827 containerd[1635]: time="2025-11-04T04:56:55.750766578Z" level=info msg="connecting to shim 994db91bef229284fafbfcb31b192fb8ee2d344f6cea2a9f294c45d4d3c5a88b" address="unix:///run/containerd/s/69260936f5ff211a604d81b8889fd195ae771268d1009b4bd893fdd26ac2d39a" protocol=ttrpc version=3 Nov 4 04:56:55.756519 systemd[1]: Started cri-containerd-03229023a392967de27f1e7db9f944127a98540e826821ffbf619707902cad07.scope - libcontainer container 03229023a392967de27f1e7db9f944127a98540e826821ffbf619707902cad07. Nov 4 04:56:55.770362 systemd[1]: Started cri-containerd-e16791bf7a9d7458566433703d8700557da56feb44d0a2470d19d1e1227f95a4.scope - libcontainer container e16791bf7a9d7458566433703d8700557da56feb44d0a2470d19d1e1227f95a4. 
Nov 4 04:56:55.772073 kubelet[2442]: W1104 04:56:55.772006 2442 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Nov 4 04:56:55.772073 kubelet[2442]: E1104 04:56:55.772076 2442 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Nov 4 04:56:55.775259 systemd[1]: Started cri-containerd-994db91bef229284fafbfcb31b192fb8ee2d344f6cea2a9f294c45d4d3c5a88b.scope - libcontainer container 994db91bef229284fafbfcb31b192fb8ee2d344f6cea2a9f294c45d4d3c5a88b. Nov 4 04:56:55.842460 containerd[1635]: time="2025-11-04T04:56:55.842217789Z" level=info msg="StartContainer for \"994db91bef229284fafbfcb31b192fb8ee2d344f6cea2a9f294c45d4d3c5a88b\" returns successfully" Nov 4 04:56:55.844239 containerd[1635]: time="2025-11-04T04:56:55.844198119Z" level=info msg="StartContainer for \"03229023a392967de27f1e7db9f944127a98540e826821ffbf619707902cad07\" returns successfully" Nov 4 04:56:55.853734 containerd[1635]: time="2025-11-04T04:56:55.853693212Z" level=info msg="StartContainer for \"e16791bf7a9d7458566433703d8700557da56feb44d0a2470d19d1e1227f95a4\" returns successfully" Nov 4 04:56:55.865789 kubelet[2442]: W1104 04:56:55.864851 2442 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Nov 4 04:56:55.865789 kubelet[2442]: E1104 04:56:55.865188 2442 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Nov 4 04:56:55.982551 kubelet[2442]: I1104 04:56:55.982456 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:56:55.983062 kubelet[2442]: E1104 04:56:55.982899 2442 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Nov 4 04:56:56.286986 kubelet[2442]: E1104 04:56:56.286702 2442 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:56:56.286986 kubelet[2442]: E1104 04:56:56.286923 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:56:56.293468 kubelet[2442]: E1104 04:56:56.293215 2442 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:56:56.293468 kubelet[2442]: E1104 04:56:56.293398 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:56:56.297153 kubelet[2442]: E1104 04:56:56.297124 2442 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:56:56.300451 kubelet[2442]: E1104 04:56:56.299659 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:56:57.294348 kubelet[2442]: E1104 
04:56:57.294299 2442 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:56:57.294756 kubelet[2442]: E1104 04:56:57.294470 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:56:57.294756 kubelet[2442]: E1104 04:56:57.294513 2442 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 04:56:57.294756 kubelet[2442]: E1104 04:56:57.294689 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:56:57.341226 kubelet[2442]: E1104 04:56:57.341175 2442 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 4 04:56:57.585580 kubelet[2442]: I1104 04:56:57.585429 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:56:58.002012 kubelet[2442]: I1104 04:56:58.001752 2442 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 04:56:58.002012 kubelet[2442]: E1104 04:56:58.001810 2442 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 4 04:56:58.038038 kubelet[2442]: I1104 04:56:58.037951 2442 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:56:58.067137 kubelet[2442]: E1104 04:56:58.067059 2442 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 
4 04:56:58.067137 kubelet[2442]: I1104 04:56:58.067129 2442 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 04:56:58.069041 kubelet[2442]: E1104 04:56:58.068990 2442 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 4 04:56:58.069041 kubelet[2442]: I1104 04:56:58.069026 2442 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 04:56:58.076038 kubelet[2442]: E1104 04:56:58.075987 2442 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 4 04:56:58.221754 kubelet[2442]: I1104 04:56:58.221664 2442 apiserver.go:52] "Watching apiserver" Nov 4 04:56:58.239827 kubelet[2442]: I1104 04:56:58.239744 2442 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 04:56:59.572015 kubelet[2442]: I1104 04:56:59.571925 2442 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:56:59.603014 kubelet[2442]: E1104 04:56:59.602947 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:00.299192 kubelet[2442]: E1104 04:57:00.299157 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:00.619898 systemd[1]: Reload requested from client PID 2738 ('systemctl') (unit session-7.scope)... Nov 4 04:57:00.619925 systemd[1]: Reloading... Nov 4 04:57:00.696186 zram_generator::config[2782]: No configuration found. 
Nov 4 04:57:00.984786 systemd[1]: Reloading finished in 364 ms. Nov 4 04:57:01.020780 kubelet[2442]: I1104 04:57:01.020704 2442 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:57:01.020781 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:57:01.047830 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 04:57:01.048197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:57:01.048274 systemd[1]: kubelet.service: Consumed 1.036s CPU time, 132.7M memory peak. Nov 4 04:57:01.052161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:57:01.306878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:57:01.315435 (kubelet)[2827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 04:57:01.362657 kubelet[2827]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:57:01.362657 kubelet[2827]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 04:57:01.362657 kubelet[2827]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 04:57:01.362657 kubelet[2827]: I1104 04:57:01.362598 2827 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 04:57:01.369433 kubelet[2827]: I1104 04:57:01.369397 2827 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 4 04:57:01.369433 kubelet[2827]: I1104 04:57:01.369419 2827 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 04:57:01.369635 kubelet[2827]: I1104 04:57:01.369617 2827 server.go:954] "Client rotation is on, will bootstrap in background" Nov 4 04:57:01.370638 kubelet[2827]: I1104 04:57:01.370619 2827 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 4 04:57:01.372884 kubelet[2827]: I1104 04:57:01.372845 2827 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:57:01.377720 kubelet[2827]: I1104 04:57:01.377693 2827 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 04:57:01.382556 kubelet[2827]: I1104 04:57:01.382529 2827 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 04:57:01.382812 kubelet[2827]: I1104 04:57:01.382777 2827 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 04:57:01.382974 kubelet[2827]: I1104 04:57:01.382810 2827 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 04:57:01.383054 kubelet[2827]: I1104 04:57:01.382989 2827 topology_manager.go:138] "Creating topology manager with none policy" Nov 
4 04:57:01.383054 kubelet[2827]: I1104 04:57:01.382998 2827 container_manager_linux.go:304] "Creating device plugin manager" Nov 4 04:57:01.383054 kubelet[2827]: I1104 04:57:01.383051 2827 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:57:01.383244 kubelet[2827]: I1104 04:57:01.383225 2827 kubelet.go:446] "Attempting to sync node with API server" Nov 4 04:57:01.383278 kubelet[2827]: I1104 04:57:01.383251 2827 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 04:57:01.383278 kubelet[2827]: I1104 04:57:01.383274 2827 kubelet.go:352] "Adding apiserver pod source" Nov 4 04:57:01.383318 kubelet[2827]: I1104 04:57:01.383312 2827 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 04:57:01.384041 kubelet[2827]: I1104 04:57:01.384016 2827 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 04:57:01.384512 kubelet[2827]: I1104 04:57:01.384495 2827 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 4 04:57:01.384972 kubelet[2827]: I1104 04:57:01.384953 2827 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 04:57:01.385008 kubelet[2827]: I1104 04:57:01.384985 2827 server.go:1287] "Started kubelet" Nov 4 04:57:01.388008 kubelet[2827]: I1104 04:57:01.387948 2827 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 04:57:01.388490 kubelet[2827]: I1104 04:57:01.388466 2827 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 04:57:01.388557 kubelet[2827]: I1104 04:57:01.388525 2827 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 04:57:01.390629 kubelet[2827]: I1104 04:57:01.389569 2827 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 04:57:01.392332 kubelet[2827]: I1104 04:57:01.392279 2827 server.go:479] 
"Adding debug handlers to kubelet server" Nov 4 04:57:01.392966 kubelet[2827]: I1104 04:57:01.392939 2827 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 04:57:01.395938 kubelet[2827]: I1104 04:57:01.395914 2827 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 04:57:01.396244 kubelet[2827]: E1104 04:57:01.396212 2827 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 04:57:01.396723 kubelet[2827]: I1104 04:57:01.396702 2827 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 04:57:01.396899 kubelet[2827]: I1104 04:57:01.396883 2827 reconciler.go:26] "Reconciler: start to sync state" Nov 4 04:57:01.400862 kubelet[2827]: E1104 04:57:01.400233 2827 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 04:57:01.403895 kubelet[2827]: I1104 04:57:01.403856 2827 factory.go:221] Registration of the systemd container factory successfully Nov 4 04:57:01.404401 kubelet[2827]: I1104 04:57:01.404013 2827 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 04:57:01.406300 kubelet[2827]: I1104 04:57:01.406281 2827 factory.go:221] Registration of the containerd container factory successfully Nov 4 04:57:01.411222 kubelet[2827]: I1104 04:57:01.411159 2827 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 4 04:57:01.412519 kubelet[2827]: I1104 04:57:01.412476 2827 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 4 04:57:01.412519 kubelet[2827]: I1104 04:57:01.412508 2827 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 4 04:57:01.412519 kubelet[2827]: I1104 04:57:01.412527 2827 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 04:57:01.412708 kubelet[2827]: I1104 04:57:01.412536 2827 kubelet.go:2382] "Starting kubelet main sync loop" Nov 4 04:57:01.412708 kubelet[2827]: E1104 04:57:01.412579 2827 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 04:57:01.444865 kubelet[2827]: I1104 04:57:01.444814 2827 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 04:57:01.444865 kubelet[2827]: I1104 04:57:01.444838 2827 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 04:57:01.444865 kubelet[2827]: I1104 04:57:01.444860 2827 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:57:01.445093 kubelet[2827]: I1104 04:57:01.445064 2827 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 04:57:01.445175 kubelet[2827]: I1104 04:57:01.445085 2827 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 04:57:01.445175 kubelet[2827]: I1104 04:57:01.445163 2827 policy_none.go:49] "None policy: Start" Nov 4 04:57:01.445175 kubelet[2827]: I1104 04:57:01.445175 2827 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 04:57:01.445282 kubelet[2827]: I1104 04:57:01.445189 2827 state_mem.go:35] "Initializing new in-memory state store" Nov 4 04:57:01.445373 kubelet[2827]: I1104 04:57:01.445350 2827 state_mem.go:75] "Updated machine memory state" Nov 4 04:57:01.449048 kubelet[2827]: I1104 04:57:01.449020 2827 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 4 04:57:01.449274 kubelet[2827]: I1104 04:57:01.449253 
2827 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 04:57:01.449318 kubelet[2827]: I1104 04:57:01.449276 2827 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 04:57:01.449529 kubelet[2827]: I1104 04:57:01.449514 2827 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 04:57:01.450085 kubelet[2827]: E1104 04:57:01.450060 2827 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 04:57:01.514060 kubelet[2827]: I1104 04:57:01.514002 2827 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 04:57:01.514702 kubelet[2827]: I1104 04:57:01.514418 2827 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 04:57:01.514702 kubelet[2827]: I1104 04:57:01.514663 2827 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:57:01.522421 kubelet[2827]: E1104 04:57:01.522335 2827 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 4 04:57:01.555685 kubelet[2827]: I1104 04:57:01.555643 2827 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 04:57:01.564602 kubelet[2827]: I1104 04:57:01.564429 2827 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 4 04:57:01.564602 kubelet[2827]: I1104 04:57:01.564542 2827 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 04:57:01.699026 kubelet[2827]: I1104 04:57:01.698948 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7a920be43f6bce1bd0d937a8ac37523-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"f7a920be43f6bce1bd0d937a8ac37523\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:57:01.699026 kubelet[2827]: I1104 04:57:01.699017 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7a920be43f6bce1bd0d937a8ac37523-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f7a920be43f6bce1bd0d937a8ac37523\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:57:01.699288 kubelet[2827]: I1104 04:57:01.699060 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:57:01.699288 kubelet[2827]: I1104 04:57:01.699084 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:57:01.699288 kubelet[2827]: I1104 04:57:01.699138 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7a920be43f6bce1bd0d937a8ac37523-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f7a920be43f6bce1bd0d937a8ac37523\") " pod="kube-system/kube-apiserver-localhost" Nov 4 04:57:01.699288 kubelet[2827]: I1104 04:57:01.699160 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:57:01.699288 kubelet[2827]: I1104 04:57:01.699187 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:57:01.699429 kubelet[2827]: I1104 04:57:01.699336 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 04:57:01.699429 kubelet[2827]: I1104 04:57:01.699416 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 4 04:57:01.821741 kubelet[2827]: E1104 04:57:01.821600 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:01.822050 kubelet[2827]: E1104 04:57:01.822026 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:01.823477 kubelet[2827]: E1104 04:57:01.823437 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:02.384057 kubelet[2827]: I1104 04:57:02.383986 2827 apiserver.go:52] "Watching apiserver" Nov 4 04:57:02.397529 kubelet[2827]: I1104 04:57:02.397455 2827 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 04:57:02.430858 kubelet[2827]: I1104 04:57:02.430814 2827 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 04:57:02.431027 kubelet[2827]: I1104 04:57:02.430903 2827 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 04:57:02.431242 kubelet[2827]: E1104 04:57:02.431205 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:02.441397 kubelet[2827]: E1104 04:57:02.441317 2827 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 4 04:57:02.441677 kubelet[2827]: E1104 04:57:02.441544 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:02.442788 kubelet[2827]: E1104 04:57:02.442720 2827 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 4 04:57:02.442879 kubelet[2827]: E1104 04:57:02.442857 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:02.461457 kubelet[2827]: I1104 04:57:02.461294 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.461205347 
podStartE2EDuration="1.461205347s" podCreationTimestamp="2025-11-04 04:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:57:02.456003922 +0000 UTC m=+1.136851486" watchObservedRunningTime="2025-11-04 04:57:02.461205347 +0000 UTC m=+1.142052901" Nov 4 04:57:02.480562 kubelet[2827]: I1104 04:57:02.480490 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.480465257 podStartE2EDuration="1.480465257s" podCreationTimestamp="2025-11-04 04:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:57:02.470453569 +0000 UTC m=+1.151301143" watchObservedRunningTime="2025-11-04 04:57:02.480465257 +0000 UTC m=+1.161312811" Nov 4 04:57:02.480786 kubelet[2827]: I1104 04:57:02.480604 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.480599688 podStartE2EDuration="3.480599688s" podCreationTimestamp="2025-11-04 04:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:57:02.480555015 +0000 UTC m=+1.161402569" watchObservedRunningTime="2025-11-04 04:57:02.480599688 +0000 UTC m=+1.161447242" Nov 4 04:57:03.432624 kubelet[2827]: E1104 04:57:03.432573 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:03.433139 kubelet[2827]: E1104 04:57:03.433122 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:04.434967 kubelet[2827]: E1104 
04:57:04.434918 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:04.591940 kubelet[2827]: E1104 04:57:04.591879 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:06.614738 kubelet[2827]: I1104 04:57:06.614699 2827 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 04:57:06.615275 containerd[1635]: time="2025-11-04T04:57:06.615175193Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 4 04:57:06.615519 kubelet[2827]: I1104 04:57:06.615390 2827 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 04:57:07.490052 kubelet[2827]: E1104 04:57:07.489992 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:07.596948 systemd[1]: Created slice kubepods-besteffort-pod228baaa7_d62f_455a_b302_e8ce1e9646f6.slice - libcontainer container kubepods-besteffort-pod228baaa7_d62f_455a_b302_e8ce1e9646f6.slice. 
Nov 4 04:57:07.635403 kubelet[2827]: I1104 04:57:07.635332 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/228baaa7-d62f-455a-b302-e8ce1e9646f6-kube-proxy\") pod \"kube-proxy-nb2dc\" (UID: \"228baaa7-d62f-455a-b302-e8ce1e9646f6\") " pod="kube-system/kube-proxy-nb2dc" Nov 4 04:57:07.635403 kubelet[2827]: I1104 04:57:07.635408 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/228baaa7-d62f-455a-b302-e8ce1e9646f6-xtables-lock\") pod \"kube-proxy-nb2dc\" (UID: \"228baaa7-d62f-455a-b302-e8ce1e9646f6\") " pod="kube-system/kube-proxy-nb2dc" Nov 4 04:57:07.636127 kubelet[2827]: I1104 04:57:07.635434 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/228baaa7-d62f-455a-b302-e8ce1e9646f6-lib-modules\") pod \"kube-proxy-nb2dc\" (UID: \"228baaa7-d62f-455a-b302-e8ce1e9646f6\") " pod="kube-system/kube-proxy-nb2dc" Nov 4 04:57:07.636127 kubelet[2827]: I1104 04:57:07.635464 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf6rk\" (UniqueName: \"kubernetes.io/projected/228baaa7-d62f-455a-b302-e8ce1e9646f6-kube-api-access-sf6rk\") pod \"kube-proxy-nb2dc\" (UID: \"228baaa7-d62f-455a-b302-e8ce1e9646f6\") " pod="kube-system/kube-proxy-nb2dc" Nov 4 04:57:07.735881 kubelet[2827]: I1104 04:57:07.735821 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk8zt\" (UniqueName: \"kubernetes.io/projected/1d8a1348-445b-413e-9c1e-a01e856c3eeb-kube-api-access-qk8zt\") pod \"tigera-operator-7dcd859c48-wzbjp\" (UID: \"1d8a1348-445b-413e-9c1e-a01e856c3eeb\") " pod="tigera-operator/tigera-operator-7dcd859c48-wzbjp" Nov 4 04:57:07.735881 kubelet[2827]: I1104 
04:57:07.735863 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1d8a1348-445b-413e-9c1e-a01e856c3eeb-var-lib-calico\") pod \"tigera-operator-7dcd859c48-wzbjp\" (UID: \"1d8a1348-445b-413e-9c1e-a01e856c3eeb\") " pod="tigera-operator/tigera-operator-7dcd859c48-wzbjp" Nov 4 04:57:07.736132 systemd[1]: Created slice kubepods-besteffort-pod1d8a1348_445b_413e_9c1e_a01e856c3eeb.slice - libcontainer container kubepods-besteffort-pod1d8a1348_445b_413e_9c1e_a01e856c3eeb.slice. Nov 4 04:57:07.906153 kubelet[2827]: E1104 04:57:07.905943 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:07.906889 containerd[1635]: time="2025-11-04T04:57:07.906689321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nb2dc,Uid:228baaa7-d62f-455a-b302-e8ce1e9646f6,Namespace:kube-system,Attempt:0,}" Nov 4 04:57:08.045893 containerd[1635]: time="2025-11-04T04:57:08.045820152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wzbjp,Uid:1d8a1348-445b-413e-9c1e-a01e856c3eeb,Namespace:tigera-operator,Attempt:0,}" Nov 4 04:57:08.440290 kubelet[2827]: E1104 04:57:08.440249 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:09.325923 containerd[1635]: time="2025-11-04T04:57:09.325837847Z" level=info msg="connecting to shim 1729fb8d82741391483a0d25b958c97269ef845b37d4e71ae6020a19249c63b4" address="unix:///run/containerd/s/9633fee7e2466515fffe52ac4ac8e8c9d91f317a4ce0e6ca5656ba0b2bded746" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:57:09.327315 containerd[1635]: time="2025-11-04T04:57:09.327280936Z" level=info msg="connecting to shim 
c9f52f17a5583f4d59316c9acca3d1e68396cfb7c2b54a464a44631fd66a9a5b" address="unix:///run/containerd/s/55c2396a30c16296c0226754ee7e2640e9a79bb6ece542c9d3a4b6a7e17cfd1e" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:57:09.385351 systemd[1]: Started cri-containerd-1729fb8d82741391483a0d25b958c97269ef845b37d4e71ae6020a19249c63b4.scope - libcontainer container 1729fb8d82741391483a0d25b958c97269ef845b37d4e71ae6020a19249c63b4. Nov 4 04:57:09.388637 systemd[1]: Started cri-containerd-c9f52f17a5583f4d59316c9acca3d1e68396cfb7c2b54a464a44631fd66a9a5b.scope - libcontainer container c9f52f17a5583f4d59316c9acca3d1e68396cfb7c2b54a464a44631fd66a9a5b. Nov 4 04:57:09.455402 containerd[1635]: time="2025-11-04T04:57:09.455331700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nb2dc,Uid:228baaa7-d62f-455a-b302-e8ce1e9646f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9f52f17a5583f4d59316c9acca3d1e68396cfb7c2b54a464a44631fd66a9a5b\"" Nov 4 04:57:09.456106 kubelet[2827]: E1104 04:57:09.456078 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:09.457669 containerd[1635]: time="2025-11-04T04:57:09.457634006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wzbjp,Uid:1d8a1348-445b-413e-9c1e-a01e856c3eeb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1729fb8d82741391483a0d25b958c97269ef845b37d4e71ae6020a19249c63b4\"" Nov 4 04:57:09.458937 containerd[1635]: time="2025-11-04T04:57:09.458884635Z" level=info msg="CreateContainer within sandbox \"c9f52f17a5583f4d59316c9acca3d1e68396cfb7c2b54a464a44631fd66a9a5b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 04:57:09.459472 containerd[1635]: time="2025-11-04T04:57:09.459447508Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 4 04:57:09.488493 containerd[1635]: 
time="2025-11-04T04:57:09.486657680Z" level=info msg="Container fe912b91fa5091580882a11fc95d6ff9259861c75171c3138511526351ca0827: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:57:09.496964 containerd[1635]: time="2025-11-04T04:57:09.496886390Z" level=info msg="CreateContainer within sandbox \"c9f52f17a5583f4d59316c9acca3d1e68396cfb7c2b54a464a44631fd66a9a5b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe912b91fa5091580882a11fc95d6ff9259861c75171c3138511526351ca0827\"" Nov 4 04:57:09.497662 containerd[1635]: time="2025-11-04T04:57:09.497625232Z" level=info msg="StartContainer for \"fe912b91fa5091580882a11fc95d6ff9259861c75171c3138511526351ca0827\"" Nov 4 04:57:09.499570 containerd[1635]: time="2025-11-04T04:57:09.499532159Z" level=info msg="connecting to shim fe912b91fa5091580882a11fc95d6ff9259861c75171c3138511526351ca0827" address="unix:///run/containerd/s/55c2396a30c16296c0226754ee7e2640e9a79bb6ece542c9d3a4b6a7e17cfd1e" protocol=ttrpc version=3 Nov 4 04:57:09.520262 systemd[1]: Started cri-containerd-fe912b91fa5091580882a11fc95d6ff9259861c75171c3138511526351ca0827.scope - libcontainer container fe912b91fa5091580882a11fc95d6ff9259861c75171c3138511526351ca0827. Nov 4 04:57:09.570391 containerd[1635]: time="2025-11-04T04:57:09.570345541Z" level=info msg="StartContainer for \"fe912b91fa5091580882a11fc95d6ff9259861c75171c3138511526351ca0827\" returns successfully" Nov 4 04:57:10.447529 kubelet[2827]: E1104 04:57:10.447487 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:11.369638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268120456.mount: Deactivated successfully. 
Nov 4 04:57:12.784644 containerd[1635]: time="2025-11-04T04:57:12.784552388Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:12.785487 containerd[1635]: time="2025-11-04T04:57:12.785416746Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23559564" Nov 4 04:57:12.786905 containerd[1635]: time="2025-11-04T04:57:12.786850317Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:12.789283 containerd[1635]: time="2025-11-04T04:57:12.789238617Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:12.789827 containerd[1635]: time="2025-11-04T04:57:12.789733512Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.330251861s" Nov 4 04:57:12.789827 containerd[1635]: time="2025-11-04T04:57:12.789794787Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 4 04:57:12.792222 containerd[1635]: time="2025-11-04T04:57:12.792174029Z" level=info msg="CreateContainer within sandbox \"1729fb8d82741391483a0d25b958c97269ef845b37d4e71ae6020a19249c63b4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 4 04:57:12.803444 containerd[1635]: time="2025-11-04T04:57:12.803386419Z" level=info msg="Container 
1111a10cc909408d13860bda1ff42120eceb9993181e8f883d294a35a4e2b654: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:57:12.811584 containerd[1635]: time="2025-11-04T04:57:12.811527342Z" level=info msg="CreateContainer within sandbox \"1729fb8d82741391483a0d25b958c97269ef845b37d4e71ae6020a19249c63b4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1111a10cc909408d13860bda1ff42120eceb9993181e8f883d294a35a4e2b654\"" Nov 4 04:57:12.812500 containerd[1635]: time="2025-11-04T04:57:12.812467020Z" level=info msg="StartContainer for \"1111a10cc909408d13860bda1ff42120eceb9993181e8f883d294a35a4e2b654\"" Nov 4 04:57:12.813552 containerd[1635]: time="2025-11-04T04:57:12.813523036Z" level=info msg="connecting to shim 1111a10cc909408d13860bda1ff42120eceb9993181e8f883d294a35a4e2b654" address="unix:///run/containerd/s/9633fee7e2466515fffe52ac4ac8e8c9d91f317a4ce0e6ca5656ba0b2bded746" protocol=ttrpc version=3 Nov 4 04:57:12.840256 systemd[1]: Started cri-containerd-1111a10cc909408d13860bda1ff42120eceb9993181e8f883d294a35a4e2b654.scope - libcontainer container 1111a10cc909408d13860bda1ff42120eceb9993181e8f883d294a35a4e2b654. 
Nov 4 04:57:12.876869 containerd[1635]: time="2025-11-04T04:57:12.876560538Z" level=info msg="StartContainer for \"1111a10cc909408d13860bda1ff42120eceb9993181e8f883d294a35a4e2b654\" returns successfully" Nov 4 04:57:13.466014 kubelet[2827]: I1104 04:57:13.465919 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-wzbjp" podStartSLOduration=3.13415681 podStartE2EDuration="6.465895633s" podCreationTimestamp="2025-11-04 04:57:07 +0000 UTC" firstStartedPulling="2025-11-04 04:57:09.459029997 +0000 UTC m=+8.139877551" lastFinishedPulling="2025-11-04 04:57:12.79076882 +0000 UTC m=+11.471616374" observedRunningTime="2025-11-04 04:57:13.465282986 +0000 UTC m=+12.146130550" watchObservedRunningTime="2025-11-04 04:57:13.465895633 +0000 UTC m=+12.146743187" Nov 4 04:57:13.466635 kubelet[2827]: I1104 04:57:13.466208 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nb2dc" podStartSLOduration=6.466200975 podStartE2EDuration="6.466200975s" podCreationTimestamp="2025-11-04 04:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:57:10.491819923 +0000 UTC m=+9.172667477" watchObservedRunningTime="2025-11-04 04:57:13.466200975 +0000 UTC m=+12.147048529" Nov 4 04:57:13.745588 kubelet[2827]: E1104 04:57:13.745404 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:14.601586 kubelet[2827]: E1104 04:57:14.599478 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:20.140702 sudo[1841]: pam_unix(sudo:session): session closed for user root Nov 4 04:57:20.144609 sshd[1840]: Connection closed by 10.0.0.1 
port 57890 Nov 4 04:57:20.147810 sshd-session[1837]: pam_unix(sshd:session): session closed for user core Nov 4 04:57:20.152481 systemd[1]: sshd@6-10.0.0.39:22-10.0.0.1:57890.service: Deactivated successfully. Nov 4 04:57:20.155017 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 04:57:20.155963 systemd[1]: session-7.scope: Consumed 5.495s CPU time, 220.4M memory peak. Nov 4 04:57:20.158919 systemd-logind[1608]: Session 7 logged out. Waiting for processes to exit. Nov 4 04:57:20.160960 systemd-logind[1608]: Removed session 7. Nov 4 04:57:25.043817 systemd[1]: Created slice kubepods-besteffort-pod220028f1_29f3_42c6_a599_2775bcec2f58.slice - libcontainer container kubepods-besteffort-pod220028f1_29f3_42c6_a599_2775bcec2f58.slice. Nov 4 04:57:25.047355 kubelet[2827]: I1104 04:57:25.047305 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/220028f1-29f3-42c6-a599-2775bcec2f58-typha-certs\") pod \"calico-typha-7bd69d67cb-ngv4g\" (UID: \"220028f1-29f3-42c6-a599-2775bcec2f58\") " pod="calico-system/calico-typha-7bd69d67cb-ngv4g" Nov 4 04:57:25.047698 kubelet[2827]: I1104 04:57:25.047358 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw6f5\" (UniqueName: \"kubernetes.io/projected/220028f1-29f3-42c6-a599-2775bcec2f58-kube-api-access-gw6f5\") pod \"calico-typha-7bd69d67cb-ngv4g\" (UID: \"220028f1-29f3-42c6-a599-2775bcec2f58\") " pod="calico-system/calico-typha-7bd69d67cb-ngv4g" Nov 4 04:57:25.047698 kubelet[2827]: I1104 04:57:25.047389 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/220028f1-29f3-42c6-a599-2775bcec2f58-tigera-ca-bundle\") pod \"calico-typha-7bd69d67cb-ngv4g\" (UID: \"220028f1-29f3-42c6-a599-2775bcec2f58\") " pod="calico-system/calico-typha-7bd69d67cb-ngv4g" Nov 4 
04:57:25.220334 systemd[1]: Created slice kubepods-besteffort-podfa168705_9426_4aef_a6a3_3cb408bc19c4.slice - libcontainer container kubepods-besteffort-podfa168705_9426_4aef_a6a3_3cb408bc19c4.slice. Nov 4 04:57:25.249206 kubelet[2827]: I1104 04:57:25.249092 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fa168705-9426-4aef-a6a3-3cb408bc19c4-flexvol-driver-host\") pod \"calico-node-spdf4\" (UID: \"fa168705-9426-4aef-a6a3-3cb408bc19c4\") " pod="calico-system/calico-node-spdf4" Nov 4 04:57:25.249206 kubelet[2827]: I1104 04:57:25.249183 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa168705-9426-4aef-a6a3-3cb408bc19c4-xtables-lock\") pod \"calico-node-spdf4\" (UID: \"fa168705-9426-4aef-a6a3-3cb408bc19c4\") " pod="calico-system/calico-node-spdf4" Nov 4 04:57:25.249206 kubelet[2827]: I1104 04:57:25.249204 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa168705-9426-4aef-a6a3-3cb408bc19c4-lib-modules\") pod \"calico-node-spdf4\" (UID: \"fa168705-9426-4aef-a6a3-3cb408bc19c4\") " pod="calico-system/calico-node-spdf4" Nov 4 04:57:25.249206 kubelet[2827]: I1104 04:57:25.249221 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fa168705-9426-4aef-a6a3-3cb408bc19c4-policysync\") pod \"calico-node-spdf4\" (UID: \"fa168705-9426-4aef-a6a3-3cb408bc19c4\") " pod="calico-system/calico-node-spdf4" Nov 4 04:57:25.249490 kubelet[2827]: I1104 04:57:25.249237 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/fa168705-9426-4aef-a6a3-3cb408bc19c4-var-lib-calico\") pod \"calico-node-spdf4\" (UID: \"fa168705-9426-4aef-a6a3-3cb408bc19c4\") " pod="calico-system/calico-node-spdf4" Nov 4 04:57:25.249490 kubelet[2827]: I1104 04:57:25.249255 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fa168705-9426-4aef-a6a3-3cb408bc19c4-cni-bin-dir\") pod \"calico-node-spdf4\" (UID: \"fa168705-9426-4aef-a6a3-3cb408bc19c4\") " pod="calico-system/calico-node-spdf4" Nov 4 04:57:25.249490 kubelet[2827]: I1104 04:57:25.249269 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fa168705-9426-4aef-a6a3-3cb408bc19c4-cni-net-dir\") pod \"calico-node-spdf4\" (UID: \"fa168705-9426-4aef-a6a3-3cb408bc19c4\") " pod="calico-system/calico-node-spdf4" Nov 4 04:57:25.249490 kubelet[2827]: I1104 04:57:25.249285 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa168705-9426-4aef-a6a3-3cb408bc19c4-tigera-ca-bundle\") pod \"calico-node-spdf4\" (UID: \"fa168705-9426-4aef-a6a3-3cb408bc19c4\") " pod="calico-system/calico-node-spdf4" Nov 4 04:57:25.249490 kubelet[2827]: I1104 04:57:25.249337 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fa168705-9426-4aef-a6a3-3cb408bc19c4-cni-log-dir\") pod \"calico-node-spdf4\" (UID: \"fa168705-9426-4aef-a6a3-3cb408bc19c4\") " pod="calico-system/calico-node-spdf4" Nov 4 04:57:25.249615 kubelet[2827]: I1104 04:57:25.249358 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fa168705-9426-4aef-a6a3-3cb408bc19c4-var-run-calico\") 
pod \"calico-node-spdf4\" (UID: \"fa168705-9426-4aef-a6a3-3cb408bc19c4\") " pod="calico-system/calico-node-spdf4" Nov 4 04:57:25.249615 kubelet[2827]: I1104 04:57:25.249373 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x46mj\" (UniqueName: \"kubernetes.io/projected/fa168705-9426-4aef-a6a3-3cb408bc19c4-kube-api-access-x46mj\") pod \"calico-node-spdf4\" (UID: \"fa168705-9426-4aef-a6a3-3cb408bc19c4\") " pod="calico-system/calico-node-spdf4" Nov 4 04:57:25.249615 kubelet[2827]: I1104 04:57:25.249390 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fa168705-9426-4aef-a6a3-3cb408bc19c4-node-certs\") pod \"calico-node-spdf4\" (UID: \"fa168705-9426-4aef-a6a3-3cb408bc19c4\") " pod="calico-system/calico-node-spdf4" Nov 4 04:57:25.354814 kubelet[2827]: E1104 04:57:25.354636 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:25.356385 containerd[1635]: time="2025-11-04T04:57:25.356160821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bd69d67cb-ngv4g,Uid:220028f1-29f3-42c6-a599-2775bcec2f58,Namespace:calico-system,Attempt:0,}" Nov 4 04:57:25.364720 kubelet[2827]: E1104 04:57:25.364642 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:25.364720 kubelet[2827]: W1104 04:57:25.364698 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:25.364927 kubelet[2827]: E1104 04:57:25.364759 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, 
skipping. Error: unexpected end of JSON input" Nov 4 04:57:25.383191 kubelet[2827]: E1104 04:57:25.383060 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:25.383191 kubelet[2827]: W1104 04:57:25.383093 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:25.383191 kubelet[2827]: E1104 04:57:25.383136 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:25.399545 containerd[1635]: time="2025-11-04T04:57:25.398814384Z" level=info msg="connecting to shim e9d06dbceb189d797c6e64120633077432946aefa5abf43d0a1f9d80ef1bbcdc" address="unix:///run/containerd/s/dcc608bed5616d75644025f38a58546368aa046b225087e99ee11bff0c3721ae" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:57:25.424283 systemd[1]: Started cri-containerd-e9d06dbceb189d797c6e64120633077432946aefa5abf43d0a1f9d80ef1bbcdc.scope - libcontainer container e9d06dbceb189d797c6e64120633077432946aefa5abf43d0a1f9d80ef1bbcdc. 
Nov 4 04:57:25.481135 containerd[1635]: time="2025-11-04T04:57:25.481053566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bd69d67cb-ngv4g,Uid:220028f1-29f3-42c6-a599-2775bcec2f58,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9d06dbceb189d797c6e64120633077432946aefa5abf43d0a1f9d80ef1bbcdc\"" Nov 4 04:57:25.483010 kubelet[2827]: E1104 04:57:25.482964 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:25.485342 containerd[1635]: time="2025-11-04T04:57:25.484653610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 4 04:57:25.503091 kubelet[2827]: E1104 04:57:25.503030 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:57:25.525760 kubelet[2827]: E1104 04:57:25.525718 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:25.526808 containerd[1635]: time="2025-11-04T04:57:25.526304516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-spdf4,Uid:fa168705-9426-4aef-a6a3-3cb408bc19c4,Namespace:calico-system,Attempt:0,}" Nov 4 04:57:25.537227 kubelet[2827]: E1104 04:57:25.537172 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:25.537227 kubelet[2827]: W1104 04:57:25.537204 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, 
output: "" Nov 4 04:57:25.537227 kubelet[2827]: E1104 04:57:25.537232 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:25.552441 kubelet[2827]: E1104 04:57:25.552292 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:25.552441 kubelet[2827]: I1104 04:57:25.552327 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjjtp\" (UniqueName: \"kubernetes.io/projected/81cc34c3-6e55-409f-a691-f7248edc74db-kube-api-access-mjjtp\") pod \"csi-node-driver-t4jsk\" (UID: \"81cc34c3-6e55-409f-a691-f7248edc74db\") " pod="calico-system/csi-node-driver-t4jsk" Nov 4 04:57:25.552659 kubelet[2827]: E1104 04:57:25.552600 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:25.552659 kubelet[2827]: W1104 04:57:25.552654 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:25.552720 kubelet[2827]: E1104 04:57:25.552683 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:25.552720 kubelet[2827]: I1104 04:57:25.552717 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/81cc34c3-6e55-409f-a691-f7248edc74db-socket-dir\") pod \"csi-node-driver-t4jsk\" (UID: \"81cc34c3-6e55-409f-a691-f7248edc74db\") " pod="calico-system/csi-node-driver-t4jsk" Nov 4 04:57:25.553406 kubelet[2827]: I1104 04:57:25.553382 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81cc34c3-6e55-409f-a691-f7248edc74db-kubelet-dir\") pod \"csi-node-driver-t4jsk\" (UID: \"81cc34c3-6e55-409f-a691-f7248edc74db\") " pod="calico-system/csi-node-driver-t4jsk" Nov 4 04:57:25.554729 kubelet[2827]: I1104 04:57:25.554717 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/81cc34c3-6e55-409f-a691-f7248edc74db-registration-dir\") pod \"csi-node-driver-t4jsk\" (UID: \"81cc34c3-6e55-409f-a691-f7248edc74db\") " pod="calico-system/csi-node-driver-t4jsk" Nov 4 04:57:25.555157 kubelet[2827]: I1104 04:57:25.555079 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/81cc34c3-6e55-409f-a691-f7248edc74db-varrun\") pod \"csi-node-driver-t4jsk\" (UID: \"81cc34c3-6e55-409f-a691-f7248edc74db\") " pod="calico-system/csi-node-driver-t4jsk" Nov 4 04:57:25.556000 kubelet[2827]: E1104 04:57:25.555948 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:25.556291 containerd[1635]: time="2025-11-04T04:57:25.556255374Z" level=info msg="connecting to shim e8342b87bcc2ce6c83aa354a7ab0c3340fc93f49c0f44a233120d13b6bc4b6b6" address="unix:///run/containerd/s/f6facf13d9522dcd2e959b857f1d9d472e93098f3ce92e2474a5e8442b2c4434" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:57:25.584477 systemd[1]: Started cri-containerd-e8342b87bcc2ce6c83aa354a7ab0c3340fc93f49c0f44a233120d13b6bc4b6b6.scope - libcontainer container e8342b87bcc2ce6c83aa354a7ab0c3340fc93f49c0f44a233120d13b6bc4b6b6. 
Nov 4 04:57:25.629603 containerd[1635]: time="2025-11-04T04:57:25.627674214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-spdf4,Uid:fa168705-9426-4aef-a6a3-3cb408bc19c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8342b87bcc2ce6c83aa354a7ab0c3340fc93f49c0f44a233120d13b6bc4b6b6\"" Nov 4 04:57:25.629880 kubelet[2827]: E1104 04:57:25.629230 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:25.656528 kubelet[2827]: E1104 04:57:25.656457 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:25.656694 kubelet[2827]: E1104 04:57:25.656682 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:25.656694 kubelet[2827]: W1104 04:57:25.656692 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:25.656760 kubelet[2827]: E1104 04:57:25.656703 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:25.656915 kubelet[2827]: E1104 04:57:25.656898 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:25.656947 kubelet[2827]: W1104 04:57:25.656914 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:25.656947 kubelet[2827]: E1104 04:57:25.656934 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:25.657191 kubelet[2827]: E1104 04:57:25.657174 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:25.657191 kubelet[2827]: W1104 04:57:25.657189 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:25.657310 kubelet[2827]: E1104 04:57:25.657207 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:25.657444 kubelet[2827]: E1104 04:57:25.657429 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:25.657444 kubelet[2827]: W1104 04:57:25.657440 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:25.657550 kubelet[2827]: E1104 04:57:25.657529 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:25.657724 kubelet[2827]: E1104 04:57:25.657709 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:25.657760 kubelet[2827]: W1104 04:57:25.657723 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:25.657870 kubelet[2827]: E1104 04:57:25.657848 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:25.658039 kubelet[2827]: E1104 04:57:25.657972 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:25.658039 kubelet[2827]: W1104 04:57:25.657982 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:25.658249 kubelet[2827]: E1104 04:57:25.658075 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:25.658333 kubelet[2827]: E1104 04:57:25.658264 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:25.658333 kubelet[2827]: W1104 04:57:25.658274 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:25.658333 kubelet[2827]: E1104 04:57:25.658325 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:25.658559 kubelet[2827]: E1104 04:57:25.658480 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:25.658559 kubelet[2827]: W1104 04:57:25.658490 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:25.658559 kubelet[2827]: E1104 04:57:25.658545 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:27.413590 kubelet[2827]: E1104 04:57:27.413518 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:57:29.039530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256115341.mount: Deactivated successfully. 
Nov 4 04:57:29.413615 kubelet[2827]: E1104 04:57:29.413408 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:57:30.205795 containerd[1635]: time="2025-11-04T04:57:30.205712709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:30.207991 containerd[1635]: time="2025-11-04T04:57:30.207946754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33736633" Nov 4 04:57:30.217043 containerd[1635]: time="2025-11-04T04:57:30.216983417Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:30.219443 containerd[1635]: time="2025-11-04T04:57:30.219376070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:30.220047 containerd[1635]: time="2025-11-04T04:57:30.220004368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.734742429s" Nov 4 04:57:30.220047 containerd[1635]: time="2025-11-04T04:57:30.220044484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 4 04:57:30.221149 containerd[1635]: time="2025-11-04T04:57:30.221124457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 4 04:57:30.233801 containerd[1635]: time="2025-11-04T04:57:30.232424130Z" level=info msg="CreateContainer within sandbox \"e9d06dbceb189d797c6e64120633077432946aefa5abf43d0a1f9d80ef1bbcdc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 4 04:57:30.242503 containerd[1635]: time="2025-11-04T04:57:30.242450537Z" level=info msg="Container 6de7922490f7e495d84f71b8f23ab675f7d7e4ccf3872df6a082775c208ab7f2: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:57:30.252851 containerd[1635]: time="2025-11-04T04:57:30.252787196Z" level=info msg="CreateContainer within sandbox \"e9d06dbceb189d797c6e64120633077432946aefa5abf43d0a1f9d80ef1bbcdc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6de7922490f7e495d84f71b8f23ab675f7d7e4ccf3872df6a082775c208ab7f2\"" Nov 4 04:57:30.253901 containerd[1635]: time="2025-11-04T04:57:30.253620718Z" level=info msg="StartContainer for \"6de7922490f7e495d84f71b8f23ab675f7d7e4ccf3872df6a082775c208ab7f2\"" Nov 4 04:57:30.255329 containerd[1635]: time="2025-11-04T04:57:30.255303842Z" level=info msg="connecting to shim 6de7922490f7e495d84f71b8f23ab675f7d7e4ccf3872df6a082775c208ab7f2" address="unix:///run/containerd/s/dcc608bed5616d75644025f38a58546368aa046b225087e99ee11bff0c3721ae" protocol=ttrpc version=3 Nov 4 04:57:30.280423 systemd[1]: Started cri-containerd-6de7922490f7e495d84f71b8f23ab675f7d7e4ccf3872df6a082775c208ab7f2.scope - libcontainer container 6de7922490f7e495d84f71b8f23ab675f7d7e4ccf3872df6a082775c208ab7f2. 
Nov 4 04:57:30.338422 containerd[1635]: time="2025-11-04T04:57:30.338366303Z" level=info msg="StartContainer for \"6de7922490f7e495d84f71b8f23ab675f7d7e4ccf3872df6a082775c208ab7f2\" returns successfully" Nov 4 04:57:30.498085 kubelet[2827]: E1104 04:57:30.498038 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:30.575848 kubelet[2827]: E1104 04:57:30.575773 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:30.575848 kubelet[2827]: W1104 04:57:30.575807 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:30.575848 kubelet[2827]: E1104 04:57:30.575833 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:30.576150 kubelet[2827]: E1104 04:57:30.576042 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:30.576150 kubelet[2827]: W1104 04:57:30.576053 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:30.576150 kubelet[2827]: E1104 04:57:30.576065 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Error: unexpected end of JSON input" Nov 4 04:57:30.592827 kubelet[2827]: E1104 04:57:30.592808 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:30.592827 kubelet[2827]: W1104 04:57:30.592820 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:30.592898 kubelet[2827]: E1104 04:57:30.592831 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.414863 kubelet[2827]: E1104 04:57:31.414457 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:57:31.499444 kubelet[2827]: I1104 04:57:31.499413 2827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 04:57:31.499863 kubelet[2827]: E1104 04:57:31.499714 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:31.554368 containerd[1635]: time="2025-11-04T04:57:31.554291133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:31.555223 containerd[1635]: time="2025-11-04T04:57:31.555190628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Nov 4 04:57:31.556732 containerd[1635]: time="2025-11-04T04:57:31.556664129Z" level=info 
msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:31.558949 containerd[1635]: time="2025-11-04T04:57:31.558911330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:31.559693 containerd[1635]: time="2025-11-04T04:57:31.559663709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.338506783s" Nov 4 04:57:31.559758 containerd[1635]: time="2025-11-04T04:57:31.559698024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 4 04:57:31.562401 containerd[1635]: time="2025-11-04T04:57:31.562357587Z" level=info msg="CreateContainer within sandbox \"e8342b87bcc2ce6c83aa354a7ab0c3340fc93f49c0f44a233120d13b6bc4b6b6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 4 04:57:31.577028 containerd[1635]: time="2025-11-04T04:57:31.576894388Z" level=info msg="Container 6e1e39f86ef78b9caa56782cfd5368815f12ef49f0ee14ee084597fa12432c90: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:57:31.586373 kubelet[2827]: E1104 04:57:31.586319 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.586373 kubelet[2827]: W1104 04:57:31.586360 2827 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.586596 kubelet[2827]: E1104 04:57:31.586391 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.586730 kubelet[2827]: E1104 04:57:31.586698 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.586730 kubelet[2827]: W1104 04:57:31.586725 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.586793 kubelet[2827]: E1104 04:57:31.586737 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.587002 kubelet[2827]: E1104 04:57:31.586979 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.587002 kubelet[2827]: W1104 04:57:31.586994 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.587087 kubelet[2827]: E1104 04:57:31.587006 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.587446 kubelet[2827]: E1104 04:57:31.587413 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.587500 kubelet[2827]: W1104 04:57:31.587445 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.587500 kubelet[2827]: E1104 04:57:31.587484 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.588430 kubelet[2827]: E1104 04:57:31.588403 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.588430 kubelet[2827]: W1104 04:57:31.588427 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.588543 kubelet[2827]: E1104 04:57:31.588439 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.588824 kubelet[2827]: E1104 04:57:31.588788 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.588824 kubelet[2827]: W1104 04:57:31.588814 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.588942 kubelet[2827]: E1104 04:57:31.588837 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.589177 kubelet[2827]: E1104 04:57:31.589152 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.589177 kubelet[2827]: W1104 04:57:31.589171 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.589177 kubelet[2827]: E1104 04:57:31.589184 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.589489 kubelet[2827]: E1104 04:57:31.589465 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.589489 kubelet[2827]: W1104 04:57:31.589481 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.589625 kubelet[2827]: E1104 04:57:31.589496 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.589829 kubelet[2827]: E1104 04:57:31.589754 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.589829 kubelet[2827]: W1104 04:57:31.589766 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.589829 kubelet[2827]: E1104 04:57:31.589777 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.589963 containerd[1635]: time="2025-11-04T04:57:31.589920407Z" level=info msg="CreateContainer within sandbox \"e8342b87bcc2ce6c83aa354a7ab0c3340fc93f49c0f44a233120d13b6bc4b6b6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6e1e39f86ef78b9caa56782cfd5368815f12ef49f0ee14ee084597fa12432c90\"" Nov 4 04:57:31.590063 kubelet[2827]: E1104 04:57:31.590038 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.590063 kubelet[2827]: W1104 04:57:31.590057 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.590188 kubelet[2827]: E1104 04:57:31.590073 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.590366 kubelet[2827]: E1104 04:57:31.590340 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.590366 kubelet[2827]: W1104 04:57:31.590355 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.590366 kubelet[2827]: E1104 04:57:31.590366 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.590775 kubelet[2827]: E1104 04:57:31.590595 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.590775 kubelet[2827]: W1104 04:57:31.590614 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.590775 kubelet[2827]: E1104 04:57:31.590634 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.590904 containerd[1635]: time="2025-11-04T04:57:31.590642760Z" level=info msg="StartContainer for \"6e1e39f86ef78b9caa56782cfd5368815f12ef49f0ee14ee084597fa12432c90\"" Nov 4 04:57:31.591145 kubelet[2827]: E1104 04:57:31.591118 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.591145 kubelet[2827]: W1104 04:57:31.591135 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.591326 kubelet[2827]: E1104 04:57:31.591148 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.591406 kubelet[2827]: E1104 04:57:31.591376 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.591455 kubelet[2827]: W1104 04:57:31.591390 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.591455 kubelet[2827]: E1104 04:57:31.591425 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.591683 kubelet[2827]: E1104 04:57:31.591662 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.591765 kubelet[2827]: W1104 04:57:31.591681 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.591765 kubelet[2827]: E1104 04:57:31.591714 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.592674 containerd[1635]: time="2025-11-04T04:57:31.592622670Z" level=info msg="connecting to shim 6e1e39f86ef78b9caa56782cfd5368815f12ef49f0ee14ee084597fa12432c90" address="unix:///run/containerd/s/f6facf13d9522dcd2e959b857f1d9d472e93098f3ce92e2474a5e8442b2c4434" protocol=ttrpc version=3 Nov 4 04:57:31.596824 kubelet[2827]: E1104 04:57:31.596088 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.596824 kubelet[2827]: W1104 04:57:31.596194 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.596824 kubelet[2827]: E1104 04:57:31.596219 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.598106 kubelet[2827]: E1104 04:57:31.598062 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.598209 kubelet[2827]: W1104 04:57:31.598190 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.598443 kubelet[2827]: E1104 04:57:31.598279 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.598675 kubelet[2827]: E1104 04:57:31.598648 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.598800 kubelet[2827]: W1104 04:57:31.598785 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.598969 kubelet[2827]: E1104 04:57:31.598949 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.599368 kubelet[2827]: E1104 04:57:31.599343 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.599457 kubelet[2827]: W1104 04:57:31.599444 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.599636 kubelet[2827]: E1104 04:57:31.599589 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.599968 kubelet[2827]: E1104 04:57:31.599896 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.599968 kubelet[2827]: W1104 04:57:31.599910 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.599968 kubelet[2827]: E1104 04:57:31.599926 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.600413 kubelet[2827]: E1104 04:57:31.600347 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.600413 kubelet[2827]: W1104 04:57:31.600360 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.600413 kubelet[2827]: E1104 04:57:31.600379 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.600775 kubelet[2827]: E1104 04:57:31.600753 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.600775 kubelet[2827]: W1104 04:57:31.600766 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.601044 kubelet[2827]: E1104 04:57:31.600820 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.601128 kubelet[2827]: E1104 04:57:31.601075 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.601128 kubelet[2827]: W1104 04:57:31.601086 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.601227 kubelet[2827]: E1104 04:57:31.601184 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.601375 kubelet[2827]: E1104 04:57:31.601355 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.601375 kubelet[2827]: W1104 04:57:31.601368 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.601465 kubelet[2827]: E1104 04:57:31.601419 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.601678 kubelet[2827]: E1104 04:57:31.601613 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.601678 kubelet[2827]: W1104 04:57:31.601628 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.601678 kubelet[2827]: E1104 04:57:31.601650 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.604461 kubelet[2827]: E1104 04:57:31.604435 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.604461 kubelet[2827]: W1104 04:57:31.604452 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.604696 kubelet[2827]: E1104 04:57:31.604510 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.604852 kubelet[2827]: E1104 04:57:31.604832 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.604852 kubelet[2827]: W1104 04:57:31.604846 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.604947 kubelet[2827]: E1104 04:57:31.604905 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.605200 kubelet[2827]: E1104 04:57:31.605179 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.605200 kubelet[2827]: W1104 04:57:31.605200 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.605313 kubelet[2827]: E1104 04:57:31.605260 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.605505 kubelet[2827]: E1104 04:57:31.605479 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.605505 kubelet[2827]: W1104 04:57:31.605492 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.605793 kubelet[2827]: E1104 04:57:31.605633 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.605940 kubelet[2827]: E1104 04:57:31.605920 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.605940 kubelet[2827]: W1104 04:57:31.605933 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.606185 kubelet[2827]: E1104 04:57:31.606158 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.606838 kubelet[2827]: E1104 04:57:31.606754 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.606838 kubelet[2827]: W1104 04:57:31.606797 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.606838 kubelet[2827]: E1104 04:57:31.606821 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:57:31.607171 kubelet[2827]: E1104 04:57:31.607147 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.607171 kubelet[2827]: W1104 04:57:31.607164 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.607275 kubelet[2827]: E1104 04:57:31.607177 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.607721 kubelet[2827]: E1104 04:57:31.607689 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:57:31.607784 kubelet[2827]: W1104 04:57:31.607740 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:57:31.607784 kubelet[2827]: E1104 04:57:31.607756 2827 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:57:31.632470 systemd[1]: Started cri-containerd-6e1e39f86ef78b9caa56782cfd5368815f12ef49f0ee14ee084597fa12432c90.scope - libcontainer container 6e1e39f86ef78b9caa56782cfd5368815f12ef49f0ee14ee084597fa12432c90. Nov 4 04:57:31.710599 systemd[1]: cri-containerd-6e1e39f86ef78b9caa56782cfd5368815f12ef49f0ee14ee084597fa12432c90.scope: Deactivated successfully. Nov 4 04:57:31.711022 systemd[1]: cri-containerd-6e1e39f86ef78b9caa56782cfd5368815f12ef49f0ee14ee084597fa12432c90.scope: Consumed 51ms CPU time, 6.4M memory peak, 4.6M written to disk. 
Nov 4 04:57:31.711879 containerd[1635]: time="2025-11-04T04:57:31.711833264Z" level=info msg="received exit event container_id:\"6e1e39f86ef78b9caa56782cfd5368815f12ef49f0ee14ee084597fa12432c90\" id:\"6e1e39f86ef78b9caa56782cfd5368815f12ef49f0ee14ee084597fa12432c90\" pid:3544 exited_at:{seconds:1762232251 nanos:710417151}" Nov 4 04:57:31.714966 containerd[1635]: time="2025-11-04T04:57:31.714918516Z" level=info msg="StartContainer for \"6e1e39f86ef78b9caa56782cfd5368815f12ef49f0ee14ee084597fa12432c90\" returns successfully" Nov 4 04:57:31.742013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e1e39f86ef78b9caa56782cfd5368815f12ef49f0ee14ee084597fa12432c90-rootfs.mount: Deactivated successfully. Nov 4 04:57:32.506002 kubelet[2827]: E1104 04:57:32.505949 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:32.506602 containerd[1635]: time="2025-11-04T04:57:32.506573126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 4 04:57:32.524650 kubelet[2827]: I1104 04:57:32.524531 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bd69d67cb-ngv4g" podStartSLOduration=2.787674259 podStartE2EDuration="7.524506321s" podCreationTimestamp="2025-11-04 04:57:25 +0000 UTC" firstStartedPulling="2025-11-04 04:57:25.484072482 +0000 UTC m=+24.164920036" lastFinishedPulling="2025-11-04 04:57:30.220904544 +0000 UTC m=+28.901752098" observedRunningTime="2025-11-04 04:57:30.525277415 +0000 UTC m=+29.206124969" watchObservedRunningTime="2025-11-04 04:57:32.524506321 +0000 UTC m=+31.205353875" Nov 4 04:57:33.413141 kubelet[2827]: E1104 04:57:33.413008 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:57:35.414218 kubelet[2827]: E1104 04:57:35.413780 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:57:35.478920 containerd[1635]: time="2025-11-04T04:57:35.478870495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:35.479865 containerd[1635]: time="2025-11-04T04:57:35.479834031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 4 04:57:35.481326 containerd[1635]: time="2025-11-04T04:57:35.481283888Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:35.483765 containerd[1635]: time="2025-11-04T04:57:35.483686672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:35.484267 containerd[1635]: time="2025-11-04T04:57:35.484227745Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.977616457s" Nov 4 04:57:35.484267 containerd[1635]: time="2025-11-04T04:57:35.484264344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" 
returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 4 04:57:35.486557 containerd[1635]: time="2025-11-04T04:57:35.486517236Z" level=info msg="CreateContainer within sandbox \"e8342b87bcc2ce6c83aa354a7ab0c3340fc93f49c0f44a233120d13b6bc4b6b6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 4 04:57:35.498736 containerd[1635]: time="2025-11-04T04:57:35.498636279Z" level=info msg="Container b84378c9fb723a54ba2b61672582c274110b55ee2bb9ca665a3d85ecbb0367be: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:57:35.515557 containerd[1635]: time="2025-11-04T04:57:35.515489394Z" level=info msg="CreateContainer within sandbox \"e8342b87bcc2ce6c83aa354a7ab0c3340fc93f49c0f44a233120d13b6bc4b6b6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b84378c9fb723a54ba2b61672582c274110b55ee2bb9ca665a3d85ecbb0367be\"" Nov 4 04:57:35.516202 containerd[1635]: time="2025-11-04T04:57:35.516152576Z" level=info msg="StartContainer for \"b84378c9fb723a54ba2b61672582c274110b55ee2bb9ca665a3d85ecbb0367be\"" Nov 4 04:57:35.517976 containerd[1635]: time="2025-11-04T04:57:35.517924808Z" level=info msg="connecting to shim b84378c9fb723a54ba2b61672582c274110b55ee2bb9ca665a3d85ecbb0367be" address="unix:///run/containerd/s/f6facf13d9522dcd2e959b857f1d9d472e93098f3ce92e2474a5e8442b2c4434" protocol=ttrpc version=3 Nov 4 04:57:35.551495 systemd[1]: Started cri-containerd-b84378c9fb723a54ba2b61672582c274110b55ee2bb9ca665a3d85ecbb0367be.scope - libcontainer container b84378c9fb723a54ba2b61672582c274110b55ee2bb9ca665a3d85ecbb0367be. 
Nov 4 04:57:35.607276 containerd[1635]: time="2025-11-04T04:57:35.607218125Z" level=info msg="StartContainer for \"b84378c9fb723a54ba2b61672582c274110b55ee2bb9ca665a3d85ecbb0367be\" returns successfully" Nov 4 04:57:36.516064 kubelet[2827]: E1104 04:57:36.516026 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:36.733478 systemd[1]: cri-containerd-b84378c9fb723a54ba2b61672582c274110b55ee2bb9ca665a3d85ecbb0367be.scope: Deactivated successfully. Nov 4 04:57:36.733935 systemd[1]: cri-containerd-b84378c9fb723a54ba2b61672582c274110b55ee2bb9ca665a3d85ecbb0367be.scope: Consumed 626ms CPU time, 176.3M memory peak, 3M read from disk, 171.3M written to disk. Nov 4 04:57:36.741023 containerd[1635]: time="2025-11-04T04:57:36.740983973Z" level=info msg="received exit event container_id:\"b84378c9fb723a54ba2b61672582c274110b55ee2bb9ca665a3d85ecbb0367be\" id:\"b84378c9fb723a54ba2b61672582c274110b55ee2bb9ca665a3d85ecbb0367be\" pid:3602 exited_at:{seconds:1762232256 nanos:734712629}" Nov 4 04:57:36.769056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b84378c9fb723a54ba2b61672582c274110b55ee2bb9ca665a3d85ecbb0367be-rootfs.mount: Deactivated successfully. Nov 4 04:57:36.808711 kubelet[2827]: I1104 04:57:36.808373 2827 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 4 04:57:37.048053 systemd[1]: Created slice kubepods-burstable-podd905a9a1_a257_4351_bcbd_500992ed04d4.slice - libcontainer container kubepods-burstable-podd905a9a1_a257_4351_bcbd_500992ed04d4.slice. Nov 4 04:57:37.064442 systemd[1]: Created slice kubepods-besteffort-pod9625c40e_9448_4035_9ad8_5bb2e0d153f9.slice - libcontainer container kubepods-besteffort-pod9625c40e_9448_4035_9ad8_5bb2e0d153f9.slice. 
Nov 4 04:57:37.069746 systemd[1]: Created slice kubepods-besteffort-pod5973885e_fb9f_4950_a4af_55889b504742.slice - libcontainer container kubepods-besteffort-pod5973885e_fb9f_4950_a4af_55889b504742.slice. Nov 4 04:57:37.077694 systemd[1]: Created slice kubepods-besteffort-pod12098cb7_6382_4a20_b151_e09bfda5e484.slice - libcontainer container kubepods-besteffort-pod12098cb7_6382_4a20_b151_e09bfda5e484.slice. Nov 4 04:57:37.083564 systemd[1]: Created slice kubepods-besteffort-podf971ab18_a5fd_481a_b739_b1338118165c.slice - libcontainer container kubepods-besteffort-podf971ab18_a5fd_481a_b739_b1338118165c.slice. Nov 4 04:57:37.089013 systemd[1]: Created slice kubepods-besteffort-pod7b46c654_6c31_424f_ab6b_6ce8350f8d0d.slice - libcontainer container kubepods-besteffort-pod7b46c654_6c31_424f_ab6b_6ce8350f8d0d.slice. Nov 4 04:57:37.096574 systemd[1]: Created slice kubepods-burstable-pod50772838_873f_4a8d_accc_2159be973082.slice - libcontainer container kubepods-burstable-pod50772838_873f_4a8d_accc_2159be973082.slice. 
Nov 4 04:57:37.133170 kubelet[2827]: I1104 04:57:37.133084 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d905a9a1-a257-4351-bcbd-500992ed04d4-config-volume\") pod \"coredns-668d6bf9bc-c5grg\" (UID: \"d905a9a1-a257-4351-bcbd-500992ed04d4\") " pod="kube-system/coredns-668d6bf9bc-c5grg" Nov 4 04:57:37.133170 kubelet[2827]: I1104 04:57:37.133150 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f971ab18-a5fd-481a-b739-b1338118165c-goldmane-ca-bundle\") pod \"goldmane-666569f655-lwhrn\" (UID: \"f971ab18-a5fd-481a-b739-b1338118165c\") " pod="calico-system/goldmane-666569f655-lwhrn" Nov 4 04:57:37.133170 kubelet[2827]: I1104 04:57:37.133169 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f971ab18-a5fd-481a-b739-b1338118165c-goldmane-key-pair\") pod \"goldmane-666569f655-lwhrn\" (UID: \"f971ab18-a5fd-481a-b739-b1338118165c\") " pod="calico-system/goldmane-666569f655-lwhrn" Nov 4 04:57:37.133170 kubelet[2827]: I1104 04:57:37.133184 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9tb6\" (UniqueName: \"kubernetes.io/projected/50772838-873f-4a8d-accc-2159be973082-kube-api-access-g9tb6\") pod \"coredns-668d6bf9bc-dnpmk\" (UID: \"50772838-873f-4a8d-accc-2159be973082\") " pod="kube-system/coredns-668d6bf9bc-dnpmk" Nov 4 04:57:37.133504 kubelet[2827]: I1104 04:57:37.133205 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9625c40e-9448-4035-9ad8-5bb2e0d153f9-whisker-ca-bundle\") pod \"whisker-c67578fc4-q559v\" (UID: \"9625c40e-9448-4035-9ad8-5bb2e0d153f9\") " 
pod="calico-system/whisker-c67578fc4-q559v" Nov 4 04:57:37.133504 kubelet[2827]: I1104 04:57:37.133221 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b46c654-6c31-424f-ab6b-6ce8350f8d0d-calico-apiserver-certs\") pod \"calico-apiserver-5bfb468d79-f8pbq\" (UID: \"7b46c654-6c31-424f-ab6b-6ce8350f8d0d\") " pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" Nov 4 04:57:37.133504 kubelet[2827]: I1104 04:57:37.133243 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f971ab18-a5fd-481a-b739-b1338118165c-config\") pod \"goldmane-666569f655-lwhrn\" (UID: \"f971ab18-a5fd-481a-b739-b1338118165c\") " pod="calico-system/goldmane-666569f655-lwhrn" Nov 4 04:57:37.133504 kubelet[2827]: I1104 04:57:37.133259 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnlqr\" (UniqueName: \"kubernetes.io/projected/7b46c654-6c31-424f-ab6b-6ce8350f8d0d-kube-api-access-lnlqr\") pod \"calico-apiserver-5bfb468d79-f8pbq\" (UID: \"7b46c654-6c31-424f-ab6b-6ce8350f8d0d\") " pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" Nov 4 04:57:37.133504 kubelet[2827]: I1104 04:57:37.133290 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7zlm\" (UniqueName: \"kubernetes.io/projected/5973885e-fb9f-4950-a4af-55889b504742-kube-api-access-h7zlm\") pod \"calico-apiserver-5bfb468d79-vxx5m\" (UID: \"5973885e-fb9f-4950-a4af-55889b504742\") " pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" Nov 4 04:57:37.133696 kubelet[2827]: I1104 04:57:37.133323 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntjpb\" (UniqueName: 
\"kubernetes.io/projected/f971ab18-a5fd-481a-b739-b1338118165c-kube-api-access-ntjpb\") pod \"goldmane-666569f655-lwhrn\" (UID: \"f971ab18-a5fd-481a-b739-b1338118165c\") " pod="calico-system/goldmane-666569f655-lwhrn" Nov 4 04:57:37.133696 kubelet[2827]: I1104 04:57:37.133339 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9625c40e-9448-4035-9ad8-5bb2e0d153f9-whisker-backend-key-pair\") pod \"whisker-c67578fc4-q559v\" (UID: \"9625c40e-9448-4035-9ad8-5bb2e0d153f9\") " pod="calico-system/whisker-c67578fc4-q559v" Nov 4 04:57:37.133696 kubelet[2827]: I1104 04:57:37.133409 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12098cb7-6382-4a20-b151-e09bfda5e484-tigera-ca-bundle\") pod \"calico-kube-controllers-56bb65b864-t4njh\" (UID: \"12098cb7-6382-4a20-b151-e09bfda5e484\") " pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" Nov 4 04:57:37.133696 kubelet[2827]: I1104 04:57:37.133497 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dx4\" (UniqueName: \"kubernetes.io/projected/12098cb7-6382-4a20-b151-e09bfda5e484-kube-api-access-z4dx4\") pod \"calico-kube-controllers-56bb65b864-t4njh\" (UID: \"12098cb7-6382-4a20-b151-e09bfda5e484\") " pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" Nov 4 04:57:37.133696 kubelet[2827]: I1104 04:57:37.133559 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrdqz\" (UniqueName: \"kubernetes.io/projected/9625c40e-9448-4035-9ad8-5bb2e0d153f9-kube-api-access-rrdqz\") pod \"whisker-c67578fc4-q559v\" (UID: \"9625c40e-9448-4035-9ad8-5bb2e0d153f9\") " pod="calico-system/whisker-c67578fc4-q559v" Nov 4 04:57:37.133855 kubelet[2827]: I1104 04:57:37.133595 2827 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5973885e-fb9f-4950-a4af-55889b504742-calico-apiserver-certs\") pod \"calico-apiserver-5bfb468d79-vxx5m\" (UID: \"5973885e-fb9f-4950-a4af-55889b504742\") " pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" Nov 4 04:57:37.133855 kubelet[2827]: I1104 04:57:37.133628 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50772838-873f-4a8d-accc-2159be973082-config-volume\") pod \"coredns-668d6bf9bc-dnpmk\" (UID: \"50772838-873f-4a8d-accc-2159be973082\") " pod="kube-system/coredns-668d6bf9bc-dnpmk" Nov 4 04:57:37.133855 kubelet[2827]: I1104 04:57:37.133652 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfkgc\" (UniqueName: \"kubernetes.io/projected/d905a9a1-a257-4351-bcbd-500992ed04d4-kube-api-access-gfkgc\") pod \"coredns-668d6bf9bc-c5grg\" (UID: \"d905a9a1-a257-4351-bcbd-500992ed04d4\") " pod="kube-system/coredns-668d6bf9bc-c5grg" Nov 4 04:57:37.359299 kubelet[2827]: E1104 04:57:37.359074 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:37.360113 containerd[1635]: time="2025-11-04T04:57:37.360054760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c5grg,Uid:d905a9a1-a257-4351-bcbd-500992ed04d4,Namespace:kube-system,Attempt:0,}" Nov 4 04:57:37.368758 containerd[1635]: time="2025-11-04T04:57:37.368680937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c67578fc4-q559v,Uid:9625c40e-9448-4035-9ad8-5bb2e0d153f9,Namespace:calico-system,Attempt:0,}" Nov 4 04:57:37.374228 containerd[1635]: time="2025-11-04T04:57:37.374162862Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-5bfb468d79-vxx5m,Uid:5973885e-fb9f-4950-a4af-55889b504742,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:57:37.382955 containerd[1635]: time="2025-11-04T04:57:37.382138180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56bb65b864-t4njh,Uid:12098cb7-6382-4a20-b151-e09bfda5e484,Namespace:calico-system,Attempt:0,}" Nov 4 04:57:37.387237 containerd[1635]: time="2025-11-04T04:57:37.386513129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lwhrn,Uid:f971ab18-a5fd-481a-b739-b1338118165c,Namespace:calico-system,Attempt:0,}" Nov 4 04:57:37.392772 containerd[1635]: time="2025-11-04T04:57:37.392715032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb468d79-f8pbq,Uid:7b46c654-6c31-424f-ab6b-6ce8350f8d0d,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:57:37.399645 kubelet[2827]: E1104 04:57:37.399195 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:37.400007 containerd[1635]: time="2025-11-04T04:57:37.399970682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnpmk,Uid:50772838-873f-4a8d-accc-2159be973082,Namespace:kube-system,Attempt:0,}" Nov 4 04:57:37.424558 systemd[1]: Created slice kubepods-besteffort-pod81cc34c3_6e55_409f_a691_f7248edc74db.slice - libcontainer container kubepods-besteffort-pod81cc34c3_6e55_409f_a691_f7248edc74db.slice. 
Nov 4 04:57:37.428928 containerd[1635]: time="2025-11-04T04:57:37.428848796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4jsk,Uid:81cc34c3-6e55-409f-a691-f7248edc74db,Namespace:calico-system,Attempt:0,}" Nov 4 04:57:37.555659 kubelet[2827]: E1104 04:57:37.554933 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:37.578164 containerd[1635]: time="2025-11-04T04:57:37.578079368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 4 04:57:37.655217 containerd[1635]: time="2025-11-04T04:57:37.654756218Z" level=error msg="Failed to destroy network for sandbox \"604136088a4f66dcbbd094ac3d4e216fa7902dbd87eaa46e1f5f640a80ecc412\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.660898 containerd[1635]: time="2025-11-04T04:57:37.660775510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c67578fc4-q559v,Uid:9625c40e-9448-4035-9ad8-5bb2e0d153f9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"604136088a4f66dcbbd094ac3d4e216fa7902dbd87eaa46e1f5f640a80ecc412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.663262 kubelet[2827]: E1104 04:57:37.661304 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604136088a4f66dcbbd094ac3d4e216fa7902dbd87eaa46e1f5f640a80ecc412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Nov 4 04:57:37.663262 kubelet[2827]: E1104 04:57:37.661390 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604136088a4f66dcbbd094ac3d4e216fa7902dbd87eaa46e1f5f640a80ecc412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c67578fc4-q559v" Nov 4 04:57:37.663262 kubelet[2827]: E1104 04:57:37.661451 2827 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604136088a4f66dcbbd094ac3d4e216fa7902dbd87eaa46e1f5f640a80ecc412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c67578fc4-q559v" Nov 4 04:57:37.663453 kubelet[2827]: E1104 04:57:37.661496 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c67578fc4-q559v_calico-system(9625c40e-9448-4035-9ad8-5bb2e0d153f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c67578fc4-q559v_calico-system(9625c40e-9448-4035-9ad8-5bb2e0d153f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"604136088a4f66dcbbd094ac3d4e216fa7902dbd87eaa46e1f5f640a80ecc412\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c67578fc4-q559v" podUID="9625c40e-9448-4035-9ad8-5bb2e0d153f9" Nov 4 04:57:37.676558 containerd[1635]: time="2025-11-04T04:57:37.676479231Z" level=error msg="Failed to destroy network for sandbox 
\"0bbb371e0ba742b48cfacaaa656703811470a222622999458aa89cc6ca19a37e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.681342 containerd[1635]: time="2025-11-04T04:57:37.681262297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnpmk,Uid:50772838-873f-4a8d-accc-2159be973082,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbb371e0ba742b48cfacaaa656703811470a222622999458aa89cc6ca19a37e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.681636 kubelet[2827]: E1104 04:57:37.681580 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbb371e0ba742b48cfacaaa656703811470a222622999458aa89cc6ca19a37e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.681697 kubelet[2827]: E1104 04:57:37.681664 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbb371e0ba742b48cfacaaa656703811470a222622999458aa89cc6ca19a37e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dnpmk" Nov 4 04:57:37.681697 kubelet[2827]: E1104 04:57:37.681689 2827 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0bbb371e0ba742b48cfacaaa656703811470a222622999458aa89cc6ca19a37e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dnpmk" Nov 4 04:57:37.681758 kubelet[2827]: E1104 04:57:37.681738 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dnpmk_kube-system(50772838-873f-4a8d-accc-2159be973082)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dnpmk_kube-system(50772838-873f-4a8d-accc-2159be973082)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bbb371e0ba742b48cfacaaa656703811470a222622999458aa89cc6ca19a37e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dnpmk" podUID="50772838-873f-4a8d-accc-2159be973082" Nov 4 04:57:37.686590 containerd[1635]: time="2025-11-04T04:57:37.686520312Z" level=error msg="Failed to destroy network for sandbox \"7ffd5f9ec22cb5e8f8fffcd1627814647a16bc34d18ad617da81cf8511ec55ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.690850 containerd[1635]: time="2025-11-04T04:57:37.690691448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c5grg,Uid:d905a9a1-a257-4351-bcbd-500992ed04d4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ffd5f9ec22cb5e8f8fffcd1627814647a16bc34d18ad617da81cf8511ec55ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.691039 kubelet[2827]: E1104 04:57:37.690992 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ffd5f9ec22cb5e8f8fffcd1627814647a16bc34d18ad617da81cf8511ec55ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.691129 kubelet[2827]: E1104 04:57:37.691065 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ffd5f9ec22cb5e8f8fffcd1627814647a16bc34d18ad617da81cf8511ec55ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-c5grg" Nov 4 04:57:37.691129 kubelet[2827]: E1104 04:57:37.691089 2827 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ffd5f9ec22cb5e8f8fffcd1627814647a16bc34d18ad617da81cf8511ec55ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-c5grg" Nov 4 04:57:37.691185 kubelet[2827]: E1104 04:57:37.691154 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c5grg_kube-system(d905a9a1-a257-4351-bcbd-500992ed04d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c5grg_kube-system(d905a9a1-a257-4351-bcbd-500992ed04d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ffd5f9ec22cb5e8f8fffcd1627814647a16bc34d18ad617da81cf8511ec55ce\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-c5grg" podUID="d905a9a1-a257-4351-bcbd-500992ed04d4" Nov 4 04:57:37.701488 containerd[1635]: time="2025-11-04T04:57:37.701421850Z" level=error msg="Failed to destroy network for sandbox \"27189850be118ce56ed33a23a0c6696a9b82d1b489748681fec2cffadb820b8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.702393 containerd[1635]: time="2025-11-04T04:57:37.702351682Z" level=error msg="Failed to destroy network for sandbox \"b25dea20a9ce63502005c82bbfba6279f2356de8cbc9b56dfbbf18596c7060e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.704734 containerd[1635]: time="2025-11-04T04:57:37.704573678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb468d79-f8pbq,Uid:7b46c654-6c31-424f-ab6b-6ce8350f8d0d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27189850be118ce56ed33a23a0c6696a9b82d1b489748681fec2cffadb820b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.705082 kubelet[2827]: E1104 04:57:37.705027 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27189850be118ce56ed33a23a0c6696a9b82d1b489748681fec2cffadb820b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.705198 kubelet[2827]: E1104 04:57:37.705168 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27189850be118ce56ed33a23a0c6696a9b82d1b489748681fec2cffadb820b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" Nov 4 04:57:37.705239 kubelet[2827]: E1104 04:57:37.705204 2827 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27189850be118ce56ed33a23a0c6696a9b82d1b489748681fec2cffadb820b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" Nov 4 04:57:37.705317 kubelet[2827]: E1104 04:57:37.705267 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bfb468d79-f8pbq_calico-apiserver(7b46c654-6c31-424f-ab6b-6ce8350f8d0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bfb468d79-f8pbq_calico-apiserver(7b46c654-6c31-424f-ab6b-6ce8350f8d0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27189850be118ce56ed33a23a0c6696a9b82d1b489748681fec2cffadb820b8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" podUID="7b46c654-6c31-424f-ab6b-6ce8350f8d0d" Nov 4 04:57:37.707696 containerd[1635]: 
time="2025-11-04T04:57:37.707636497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4jsk,Uid:81cc34c3-6e55-409f-a691-f7248edc74db,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25dea20a9ce63502005c82bbfba6279f2356de8cbc9b56dfbbf18596c7060e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.707897 kubelet[2827]: E1104 04:57:37.707822 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25dea20a9ce63502005c82bbfba6279f2356de8cbc9b56dfbbf18596c7060e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.707967 kubelet[2827]: E1104 04:57:37.707906 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25dea20a9ce63502005c82bbfba6279f2356de8cbc9b56dfbbf18596c7060e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t4jsk" Nov 4 04:57:37.707967 kubelet[2827]: E1104 04:57:37.707931 2827 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25dea20a9ce63502005c82bbfba6279f2356de8cbc9b56dfbbf18596c7060e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t4jsk" Nov 4 04:57:37.708048 kubelet[2827]: E1104 
04:57:37.707979 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t4jsk_calico-system(81cc34c3-6e55-409f-a691-f7248edc74db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t4jsk_calico-system(81cc34c3-6e55-409f-a691-f7248edc74db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b25dea20a9ce63502005c82bbfba6279f2356de8cbc9b56dfbbf18596c7060e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:57:37.713398 containerd[1635]: time="2025-11-04T04:57:37.713337142Z" level=error msg="Failed to destroy network for sandbox \"23a4407634d0b079e45b02c968bc3b8c4036d7921dbc2c698dafafaae188e2e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.716071 containerd[1635]: time="2025-11-04T04:57:37.716005193Z" level=error msg="Failed to destroy network for sandbox \"ae007fb3e09e146e4870671a38fea1f038334a6bb36daf643b69c6be1a1229c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.718343 containerd[1635]: time="2025-11-04T04:57:37.718276359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb468d79-vxx5m,Uid:5973885e-fb9f-4950-a4af-55889b504742,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23a4407634d0b079e45b02c968bc3b8c4036d7921dbc2c698dafafaae188e2e4\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.718668 kubelet[2827]: E1104 04:57:37.718588 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23a4407634d0b079e45b02c968bc3b8c4036d7921dbc2c698dafafaae188e2e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.718757 kubelet[2827]: E1104 04:57:37.718673 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23a4407634d0b079e45b02c968bc3b8c4036d7921dbc2c698dafafaae188e2e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" Nov 4 04:57:37.718757 kubelet[2827]: E1104 04:57:37.718696 2827 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23a4407634d0b079e45b02c968bc3b8c4036d7921dbc2c698dafafaae188e2e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" Nov 4 04:57:37.718757 kubelet[2827]: E1104 04:57:37.718735 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bfb468d79-vxx5m_calico-apiserver(5973885e-fb9f-4950-a4af-55889b504742)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bfb468d79-vxx5m_calico-apiserver(5973885e-fb9f-4950-a4af-55889b504742)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"23a4407634d0b079e45b02c968bc3b8c4036d7921dbc2c698dafafaae188e2e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" podUID="5973885e-fb9f-4950-a4af-55889b504742" Nov 4 04:57:37.721195 containerd[1635]: time="2025-11-04T04:57:37.721152410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lwhrn,Uid:f971ab18-a5fd-481a-b739-b1338118165c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae007fb3e09e146e4870671a38fea1f038334a6bb36daf643b69c6be1a1229c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.722057 kubelet[2827]: E1104 04:57:37.722018 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae007fb3e09e146e4870671a38fea1f038334a6bb36daf643b69c6be1a1229c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.722142 kubelet[2827]: E1104 04:57:37.722072 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae007fb3e09e146e4870671a38fea1f038334a6bb36daf643b69c6be1a1229c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lwhrn" Nov 4 04:57:37.722142 kubelet[2827]: E1104 04:57:37.722121 
2827 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae007fb3e09e146e4870671a38fea1f038334a6bb36daf643b69c6be1a1229c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lwhrn" Nov 4 04:57:37.722215 kubelet[2827]: E1104 04:57:37.722165 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-lwhrn_calico-system(f971ab18-a5fd-481a-b739-b1338118165c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-lwhrn_calico-system(f971ab18-a5fd-481a-b739-b1338118165c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae007fb3e09e146e4870671a38fea1f038334a6bb36daf643b69c6be1a1229c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-lwhrn" podUID="f971ab18-a5fd-481a-b739-b1338118165c" Nov 4 04:57:37.730499 containerd[1635]: time="2025-11-04T04:57:37.730420631Z" level=error msg="Failed to destroy network for sandbox \"99ef5948df2e2d4d660ed76da5843be4f0a0fddfdf7a4bf8c7116923f5ea25ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.733030 containerd[1635]: time="2025-11-04T04:57:37.732998061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56bb65b864-t4njh,Uid:12098cb7-6382-4a20-b151-e09bfda5e484,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"99ef5948df2e2d4d660ed76da5843be4f0a0fddfdf7a4bf8c7116923f5ea25ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.733389 kubelet[2827]: E1104 04:57:37.733356 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ef5948df2e2d4d660ed76da5843be4f0a0fddfdf7a4bf8c7116923f5ea25ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:57:37.733450 kubelet[2827]: E1104 04:57:37.733402 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ef5948df2e2d4d660ed76da5843be4f0a0fddfdf7a4bf8c7116923f5ea25ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" Nov 4 04:57:37.733450 kubelet[2827]: E1104 04:57:37.733422 2827 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ef5948df2e2d4d660ed76da5843be4f0a0fddfdf7a4bf8c7116923f5ea25ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" Nov 4 04:57:37.733527 kubelet[2827]: E1104 04:57:37.733461 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56bb65b864-t4njh_calico-system(12098cb7-6382-4a20-b151-e09bfda5e484)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-56bb65b864-t4njh_calico-system(12098cb7-6382-4a20-b151-e09bfda5e484)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99ef5948df2e2d4d660ed76da5843be4f0a0fddfdf7a4bf8c7116923f5ea25ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" podUID="12098cb7-6382-4a20-b151-e09bfda5e484" Nov 4 04:57:43.238431 kubelet[2827]: I1104 04:57:43.238366 2827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 04:57:43.240668 kubelet[2827]: E1104 04:57:43.239220 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:43.568668 kubelet[2827]: E1104 04:57:43.568627 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:45.226753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3207854442.mount: Deactivated successfully. 
Nov 4 04:57:46.598654 containerd[1635]: time="2025-11-04T04:57:46.598564170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:46.600284 containerd[1635]: time="2025-11-04T04:57:46.600248745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 4 04:57:46.602533 containerd[1635]: time="2025-11-04T04:57:46.602495508Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:46.605211 containerd[1635]: time="2025-11-04T04:57:46.605166489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:57:46.605783 containerd[1635]: time="2025-11-04T04:57:46.605735803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.020121261s" Nov 4 04:57:46.605783 containerd[1635]: time="2025-11-04T04:57:46.605770679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 04:57:46.629386 containerd[1635]: time="2025-11-04T04:57:46.629329292Z" level=info msg="CreateContainer within sandbox \"e8342b87bcc2ce6c83aa354a7ab0c3340fc93f49c0f44a233120d13b6bc4b6b6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 04:57:46.650661 containerd[1635]: time="2025-11-04T04:57:46.650581109Z" level=info msg="Container 
edd79ab508479b07e9d7397784b505df44e59bc2fe881eae2f82a572fc6b9c38: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:57:46.686133 containerd[1635]: time="2025-11-04T04:57:46.686045910Z" level=info msg="CreateContainer within sandbox \"e8342b87bcc2ce6c83aa354a7ab0c3340fc93f49c0f44a233120d13b6bc4b6b6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"edd79ab508479b07e9d7397784b505df44e59bc2fe881eae2f82a572fc6b9c38\"" Nov 4 04:57:46.686719 containerd[1635]: time="2025-11-04T04:57:46.686693161Z" level=info msg="StartContainer for \"edd79ab508479b07e9d7397784b505df44e59bc2fe881eae2f82a572fc6b9c38\"" Nov 4 04:57:46.688446 containerd[1635]: time="2025-11-04T04:57:46.688415587Z" level=info msg="connecting to shim edd79ab508479b07e9d7397784b505df44e59bc2fe881eae2f82a572fc6b9c38" address="unix:///run/containerd/s/f6facf13d9522dcd2e959b857f1d9d472e93098f3ce92e2474a5e8442b2c4434" protocol=ttrpc version=3 Nov 4 04:57:46.797296 systemd[1]: Started cri-containerd-edd79ab508479b07e9d7397784b505df44e59bc2fe881eae2f82a572fc6b9c38.scope - libcontainer container edd79ab508479b07e9d7397784b505df44e59bc2fe881eae2f82a572fc6b9c38. Nov 4 04:57:47.020354 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 04:57:47.022828 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 4 04:57:47.332327 containerd[1635]: time="2025-11-04T04:57:47.332176012Z" level=info msg="StartContainer for \"edd79ab508479b07e9d7397784b505df44e59bc2fe881eae2f82a572fc6b9c38\" returns successfully" Nov 4 04:57:47.587223 kubelet[2827]: E1104 04:57:47.587041 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:47.609899 kubelet[2827]: I1104 04:57:47.609816 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-spdf4" podStartSLOduration=1.633657693 podStartE2EDuration="22.609777694s" podCreationTimestamp="2025-11-04 04:57:25 +0000 UTC" firstStartedPulling="2025-11-04 04:57:25.630312788 +0000 UTC m=+24.311160352" lastFinishedPulling="2025-11-04 04:57:46.606432799 +0000 UTC m=+45.287280353" observedRunningTime="2025-11-04 04:57:47.608720393 +0000 UTC m=+46.289567967" watchObservedRunningTime="2025-11-04 04:57:47.609777694 +0000 UTC m=+46.290625248" Nov 4 04:57:47.727194 kubelet[2827]: I1104 04:57:47.726539 2827 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrdqz\" (UniqueName: \"kubernetes.io/projected/9625c40e-9448-4035-9ad8-5bb2e0d153f9-kube-api-access-rrdqz\") pod \"9625c40e-9448-4035-9ad8-5bb2e0d153f9\" (UID: \"9625c40e-9448-4035-9ad8-5bb2e0d153f9\") " Nov 4 04:57:47.727194 kubelet[2827]: I1104 04:57:47.726596 2827 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9625c40e-9448-4035-9ad8-5bb2e0d153f9-whisker-ca-bundle\") pod \"9625c40e-9448-4035-9ad8-5bb2e0d153f9\" (UID: \"9625c40e-9448-4035-9ad8-5bb2e0d153f9\") " Nov 4 04:57:47.727194 kubelet[2827]: I1104 04:57:47.726633 2827 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/9625c40e-9448-4035-9ad8-5bb2e0d153f9-whisker-backend-key-pair\") pod \"9625c40e-9448-4035-9ad8-5bb2e0d153f9\" (UID: \"9625c40e-9448-4035-9ad8-5bb2e0d153f9\") " Nov 4 04:57:47.735534 systemd[1]: var-lib-kubelet-pods-9625c40e\x2d9448\x2d4035\x2d9ad8\x2d5bb2e0d153f9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 4 04:57:47.741365 kubelet[2827]: I1104 04:57:47.741272 2827 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9625c40e-9448-4035-9ad8-5bb2e0d153f9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9625c40e-9448-4035-9ad8-5bb2e0d153f9" (UID: "9625c40e-9448-4035-9ad8-5bb2e0d153f9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 04:57:47.745874 kubelet[2827]: I1104 04:57:47.744305 2827 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9625c40e-9448-4035-9ad8-5bb2e0d153f9-kube-api-access-rrdqz" (OuterVolumeSpecName: "kube-api-access-rrdqz") pod "9625c40e-9448-4035-9ad8-5bb2e0d153f9" (UID: "9625c40e-9448-4035-9ad8-5bb2e0d153f9"). InnerVolumeSpecName "kube-api-access-rrdqz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 04:57:47.749158 kubelet[2827]: I1104 04:57:47.747221 2827 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9625c40e-9448-4035-9ad8-5bb2e0d153f9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9625c40e-9448-4035-9ad8-5bb2e0d153f9" (UID: "9625c40e-9448-4035-9ad8-5bb2e0d153f9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 04:57:47.747419 systemd[1]: var-lib-kubelet-pods-9625c40e\x2d9448\x2d4035\x2d9ad8\x2d5bb2e0d153f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drrdqz.mount: Deactivated successfully. 
Nov 4 04:57:47.827762 kubelet[2827]: I1104 04:57:47.827688 2827 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9625c40e-9448-4035-9ad8-5bb2e0d153f9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 4 04:57:47.827762 kubelet[2827]: I1104 04:57:47.827727 2827 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rrdqz\" (UniqueName: \"kubernetes.io/projected/9625c40e-9448-4035-9ad8-5bb2e0d153f9-kube-api-access-rrdqz\") on node \"localhost\" DevicePath \"\"" Nov 4 04:57:47.827762 kubelet[2827]: I1104 04:57:47.827736 2827 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9625c40e-9448-4035-9ad8-5bb2e0d153f9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 4 04:57:48.588718 kubelet[2827]: E1104 04:57:48.588085 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:48.595153 systemd[1]: Removed slice kubepods-besteffort-pod9625c40e_9448_4035_9ad8_5bb2e0d153f9.slice - libcontainer container kubepods-besteffort-pod9625c40e_9448_4035_9ad8_5bb2e0d153f9.slice. Nov 4 04:57:48.653430 systemd[1]: Created slice kubepods-besteffort-pod14be4876_4542_4022_8773_f8e166b995c8.slice - libcontainer container kubepods-besteffort-pod14be4876_4542_4022_8773_f8e166b995c8.slice. 
Nov 4 04:57:48.733997 kubelet[2827]: I1104 04:57:48.733930 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/14be4876-4542-4022-8773-f8e166b995c8-whisker-backend-key-pair\") pod \"whisker-5879fbbc68-54k5g\" (UID: \"14be4876-4542-4022-8773-f8e166b995c8\") " pod="calico-system/whisker-5879fbbc68-54k5g" Nov 4 04:57:48.733997 kubelet[2827]: I1104 04:57:48.733985 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14be4876-4542-4022-8773-f8e166b995c8-whisker-ca-bundle\") pod \"whisker-5879fbbc68-54k5g\" (UID: \"14be4876-4542-4022-8773-f8e166b995c8\") " pod="calico-system/whisker-5879fbbc68-54k5g" Nov 4 04:57:48.733997 kubelet[2827]: I1104 04:57:48.734004 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvbmr\" (UniqueName: \"kubernetes.io/projected/14be4876-4542-4022-8773-f8e166b995c8-kube-api-access-qvbmr\") pod \"whisker-5879fbbc68-54k5g\" (UID: \"14be4876-4542-4022-8773-f8e166b995c8\") " pod="calico-system/whisker-5879fbbc68-54k5g" Nov 4 04:57:48.958288 containerd[1635]: time="2025-11-04T04:57:48.958155337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5879fbbc68-54k5g,Uid:14be4876-4542-4022-8773-f8e166b995c8,Namespace:calico-system,Attempt:0,}" Nov 4 04:57:49.169566 systemd-networkd[1531]: cali28c4ee8a234: Link UP Nov 4 04:57:49.169778 systemd-networkd[1531]: cali28c4ee8a234: Gained carrier Nov 4 04:57:49.183945 containerd[1635]: 2025-11-04 04:57:49.013 [INFO][4136] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 04:57:49.183945 containerd[1635]: 2025-11-04 04:57:49.035 [INFO][4136] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5879fbbc68--54k5g-eth0 
whisker-5879fbbc68- calico-system 14be4876-4542-4022-8773-f8e166b995c8 913 0 2025-11-04 04:57:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5879fbbc68 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5879fbbc68-54k5g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali28c4ee8a234 [] [] }} ContainerID="7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" Namespace="calico-system" Pod="whisker-5879fbbc68-54k5g" WorkloadEndpoint="localhost-k8s-whisker--5879fbbc68--54k5g-" Nov 4 04:57:49.183945 containerd[1635]: 2025-11-04 04:57:49.035 [INFO][4136] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" Namespace="calico-system" Pod="whisker-5879fbbc68-54k5g" WorkloadEndpoint="localhost-k8s-whisker--5879fbbc68--54k5g-eth0" Nov 4 04:57:49.183945 containerd[1635]: 2025-11-04 04:57:49.121 [INFO][4153] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" HandleID="k8s-pod-network.7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" Workload="localhost-k8s-whisker--5879fbbc68--54k5g-eth0" Nov 4 04:57:49.184247 containerd[1635]: 2025-11-04 04:57:49.122 [INFO][4153] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" HandleID="k8s-pod-network.7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" Workload="localhost-k8s-whisker--5879fbbc68--54k5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019f410), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5879fbbc68-54k5g", "timestamp":"2025-11-04 04:57:49.121745384 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:57:49.184247 containerd[1635]: 2025-11-04 04:57:49.122 [INFO][4153] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:57:49.184247 containerd[1635]: 2025-11-04 04:57:49.123 [INFO][4153] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 04:57:49.184247 containerd[1635]: 2025-11-04 04:57:49.123 [INFO][4153] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:57:49.184247 containerd[1635]: 2025-11-04 04:57:49.133 [INFO][4153] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" host="localhost" Nov 4 04:57:49.184247 containerd[1635]: 2025-11-04 04:57:49.138 [INFO][4153] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:57:49.184247 containerd[1635]: 2025-11-04 04:57:49.142 [INFO][4153] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:57:49.184247 containerd[1635]: 2025-11-04 04:57:49.144 [INFO][4153] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:49.184247 containerd[1635]: 2025-11-04 04:57:49.146 [INFO][4153] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:49.184247 containerd[1635]: 2025-11-04 04:57:49.146 [INFO][4153] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" host="localhost" Nov 4 04:57:49.184484 containerd[1635]: 2025-11-04 04:57:49.147 [INFO][4153] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8 Nov 4 04:57:49.184484 containerd[1635]: 
2025-11-04 04:57:49.153 [INFO][4153] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" host="localhost" Nov 4 04:57:49.184484 containerd[1635]: 2025-11-04 04:57:49.158 [INFO][4153] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" host="localhost" Nov 4 04:57:49.184484 containerd[1635]: 2025-11-04 04:57:49.158 [INFO][4153] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" host="localhost" Nov 4 04:57:49.184484 containerd[1635]: 2025-11-04 04:57:49.158 [INFO][4153] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:57:49.184484 containerd[1635]: 2025-11-04 04:57:49.158 [INFO][4153] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" HandleID="k8s-pod-network.7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" Workload="localhost-k8s-whisker--5879fbbc68--54k5g-eth0" Nov 4 04:57:49.184604 containerd[1635]: 2025-11-04 04:57:49.162 [INFO][4136] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" Namespace="calico-system" Pod="whisker-5879fbbc68-54k5g" WorkloadEndpoint="localhost-k8s-whisker--5879fbbc68--54k5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5879fbbc68--54k5g-eth0", GenerateName:"whisker-5879fbbc68-", Namespace:"calico-system", SelfLink:"", UID:"14be4876-4542-4022-8773-f8e166b995c8", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 
4, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5879fbbc68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5879fbbc68-54k5g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28c4ee8a234", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:49.184604 containerd[1635]: 2025-11-04 04:57:49.162 [INFO][4136] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" Namespace="calico-system" Pod="whisker-5879fbbc68-54k5g" WorkloadEndpoint="localhost-k8s-whisker--5879fbbc68--54k5g-eth0" Nov 4 04:57:49.184696 containerd[1635]: 2025-11-04 04:57:49.162 [INFO][4136] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28c4ee8a234 ContainerID="7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" Namespace="calico-system" Pod="whisker-5879fbbc68-54k5g" WorkloadEndpoint="localhost-k8s-whisker--5879fbbc68--54k5g-eth0" Nov 4 04:57:49.184696 containerd[1635]: 2025-11-04 04:57:49.169 [INFO][4136] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" Namespace="calico-system" Pod="whisker-5879fbbc68-54k5g" 
WorkloadEndpoint="localhost-k8s-whisker--5879fbbc68--54k5g-eth0" Nov 4 04:57:49.184740 containerd[1635]: 2025-11-04 04:57:49.169 [INFO][4136] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" Namespace="calico-system" Pod="whisker-5879fbbc68-54k5g" WorkloadEndpoint="localhost-k8s-whisker--5879fbbc68--54k5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5879fbbc68--54k5g-eth0", GenerateName:"whisker-5879fbbc68-", Namespace:"calico-system", SelfLink:"", UID:"14be4876-4542-4022-8773-f8e166b995c8", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5879fbbc68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8", Pod:"whisker-5879fbbc68-54k5g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28c4ee8a234", MAC:"42:18:bf:86:ca:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:49.184789 containerd[1635]: 2025-11-04 04:57:49.180 [INFO][4136] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" Namespace="calico-system" Pod="whisker-5879fbbc68-54k5g" WorkloadEndpoint="localhost-k8s-whisker--5879fbbc68--54k5g-eth0" Nov 4 04:57:49.268841 containerd[1635]: time="2025-11-04T04:57:49.268761271Z" level=info msg="connecting to shim 7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8" address="unix:///run/containerd/s/7e18624303dc053fffe3351e9c3deaea6a9b8ab460e9c2906edb6eb868567fa1" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:57:49.300250 systemd[1]: Started cri-containerd-7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8.scope - libcontainer container 7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8. Nov 4 04:57:49.316860 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:57:49.355061 containerd[1635]: time="2025-11-04T04:57:49.355013704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5879fbbc68-54k5g,Uid:14be4876-4542-4022-8773-f8e166b995c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"7739914ab71a7e987a0ecd948756d79a600e645486f118235340d0be4cb3edc8\"" Nov 4 04:57:49.361380 containerd[1635]: time="2025-11-04T04:57:49.361342109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:57:49.414966 containerd[1635]: time="2025-11-04T04:57:49.414854535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb468d79-f8pbq,Uid:7b46c654-6c31-424f-ab6b-6ce8350f8d0d,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:57:49.416386 containerd[1635]: time="2025-11-04T04:57:49.416356288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lwhrn,Uid:f971ab18-a5fd-481a-b739-b1338118165c,Namespace:calico-system,Attempt:0,}" Nov 4 04:57:49.417530 kubelet[2827]: I1104 04:57:49.417488 2827 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="9625c40e-9448-4035-9ad8-5bb2e0d153f9" path="/var/lib/kubelet/pods/9625c40e-9448-4035-9ad8-5bb2e0d153f9/volumes" Nov 4 04:57:49.440710 systemd-networkd[1531]: vxlan.calico: Link UP Nov 4 04:57:49.440718 systemd-networkd[1531]: vxlan.calico: Gained carrier Nov 4 04:57:49.693425 containerd[1635]: time="2025-11-04T04:57:49.693234739Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:57:49.763191 containerd[1635]: time="2025-11-04T04:57:49.763026656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:57:49.763191 containerd[1635]: time="2025-11-04T04:57:49.763126537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:57:49.763709 kubelet[2827]: E1104 04:57:49.763664 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:57:49.764197 kubelet[2827]: E1104 04:57:49.763738 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:57:49.777702 systemd-networkd[1531]: cali1953f35ada7: Link UP Nov 4 04:57:49.779244 systemd-networkd[1531]: cali1953f35ada7: Gained carrier Nov 4 04:57:49.786169 kubelet[2827]: E1104 04:57:49.785236 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b917bdde86d944a1958ebd59231f5dea,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvbmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5879fbbc68-54k5g_calico-system(14be4876-4542-4022-8773-f8e166b995c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:57:49.791159 containerd[1635]: time="2025-11-04T04:57:49.791055492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:57:49.803589 containerd[1635]: 2025-11-04 04:57:49.501 
[INFO][4281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0 calico-apiserver-5bfb468d79- calico-apiserver 7b46c654-6c31-424f-ab6b-6ce8350f8d0d 830 0 2025-11-04 04:57:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bfb468d79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bfb468d79-f8pbq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1953f35ada7 [] [] }} ContainerID="24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-f8pbq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-" Nov 4 04:57:49.803589 containerd[1635]: 2025-11-04 04:57:49.502 [INFO][4281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-f8pbq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0" Nov 4 04:57:49.803589 containerd[1635]: 2025-11-04 04:57:49.537 [INFO][4315] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" HandleID="k8s-pod-network.24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" Workload="localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0" Nov 4 04:57:49.804163 containerd[1635]: 2025-11-04 04:57:49.538 [INFO][4315] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" HandleID="k8s-pod-network.24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" 
Workload="localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ede0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bfb468d79-f8pbq", "timestamp":"2025-11-04 04:57:49.53792844 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:57:49.804163 containerd[1635]: 2025-11-04 04:57:49.538 [INFO][4315] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:57:49.804163 containerd[1635]: 2025-11-04 04:57:49.538 [INFO][4315] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 04:57:49.804163 containerd[1635]: 2025-11-04 04:57:49.538 [INFO][4315] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:57:49.804163 containerd[1635]: 2025-11-04 04:57:49.546 [INFO][4315] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" host="localhost" Nov 4 04:57:49.804163 containerd[1635]: 2025-11-04 04:57:49.551 [INFO][4315] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:57:49.804163 containerd[1635]: 2025-11-04 04:57:49.556 [INFO][4315] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:57:49.804163 containerd[1635]: 2025-11-04 04:57:49.558 [INFO][4315] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:49.804163 containerd[1635]: 2025-11-04 04:57:49.561 [INFO][4315] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:49.804163 containerd[1635]: 2025-11-04 04:57:49.561 [INFO][4315] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" host="localhost" Nov 4 04:57:49.804496 containerd[1635]: 2025-11-04 04:57:49.566 [INFO][4315] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a Nov 4 04:57:49.804496 containerd[1635]: 2025-11-04 04:57:49.713 [INFO][4315] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" host="localhost" Nov 4 04:57:49.804496 containerd[1635]: 2025-11-04 04:57:49.769 [INFO][4315] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" host="localhost" Nov 4 04:57:49.804496 containerd[1635]: 2025-11-04 04:57:49.769 [INFO][4315] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" host="localhost" Nov 4 04:57:49.804496 containerd[1635]: 2025-11-04 04:57:49.769 [INFO][4315] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:57:49.804496 containerd[1635]: 2025-11-04 04:57:49.769 [INFO][4315] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" HandleID="k8s-pod-network.24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" Workload="localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0" Nov 4 04:57:49.804682 containerd[1635]: 2025-11-04 04:57:49.773 [INFO][4281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-f8pbq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0", GenerateName:"calico-apiserver-5bfb468d79-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b46c654-6c31-424f-ab6b-6ce8350f8d0d", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bfb468d79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bfb468d79-f8pbq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1953f35ada7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:49.804765 containerd[1635]: 2025-11-04 04:57:49.774 [INFO][4281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-f8pbq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0" Nov 4 04:57:49.804765 containerd[1635]: 2025-11-04 04:57:49.774 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1953f35ada7 ContainerID="24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-f8pbq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0" Nov 4 04:57:49.804765 containerd[1635]: 2025-11-04 04:57:49.779 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-f8pbq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0" Nov 4 04:57:49.804926 containerd[1635]: 2025-11-04 04:57:49.780 [INFO][4281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-f8pbq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0", 
GenerateName:"calico-apiserver-5bfb468d79-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b46c654-6c31-424f-ab6b-6ce8350f8d0d", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bfb468d79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a", Pod:"calico-apiserver-5bfb468d79-f8pbq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1953f35ada7", MAC:"3a:10:2c:56:db:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:49.805053 containerd[1635]: 2025-11-04 04:57:49.798 [INFO][4281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-f8pbq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--f8pbq-eth0" Nov 4 04:57:49.833089 systemd-networkd[1531]: calif7ed2291e0d: Link UP Nov 4 04:57:49.833571 systemd-networkd[1531]: calif7ed2291e0d: Gained carrier Nov 4 04:57:49.876122 containerd[1635]: time="2025-11-04T04:57:49.875992867Z" level=info msg="connecting to shim 
24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a" address="unix:///run/containerd/s/4deeb45cd1e695057a483b6c56273e3467d21d046327935b214534f3d9c3fd79" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:57:49.905254 systemd[1]: Started cri-containerd-24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a.scope - libcontainer container 24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a. Nov 4 04:57:49.918929 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:57:50.105841 containerd[1635]: time="2025-11-04T04:57:50.105786753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb468d79-f8pbq,Uid:7b46c654-6c31-424f-ab6b-6ce8350f8d0d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"24b54420ef56193dd9e103370a4c4efb4389347eeb5f34aa49d7cdaf7805a70a\"" Nov 4 04:57:50.196989 containerd[1635]: time="2025-11-04T04:57:50.196910911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:57:50.279830 containerd[1635]: 2025-11-04 04:57:49.501 [INFO][4270] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--lwhrn-eth0 goldmane-666569f655- calico-system f971ab18-a5fd-481a-b739-b1338118165c 826 0 2025-11-04 04:57:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-lwhrn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif7ed2291e0d [] [] }} ContainerID="3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" Namespace="calico-system" Pod="goldmane-666569f655-lwhrn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lwhrn-" Nov 4 04:57:50.279830 containerd[1635]: 2025-11-04 
04:57:49.501 [INFO][4270] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" Namespace="calico-system" Pod="goldmane-666569f655-lwhrn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lwhrn-eth0" Nov 4 04:57:50.279830 containerd[1635]: 2025-11-04 04:57:49.538 [INFO][4313] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" HandleID="k8s-pod-network.3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" Workload="localhost-k8s-goldmane--666569f655--lwhrn-eth0" Nov 4 04:57:50.280052 containerd[1635]: 2025-11-04 04:57:49.539 [INFO][4313] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" HandleID="k8s-pod-network.3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" Workload="localhost-k8s-goldmane--666569f655--lwhrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-lwhrn", "timestamp":"2025-11-04 04:57:49.538730785 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:57:50.280052 containerd[1635]: 2025-11-04 04:57:49.539 [INFO][4313] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:57:50.280052 containerd[1635]: 2025-11-04 04:57:49.770 [INFO][4313] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:57:50.280052 containerd[1635]: 2025-11-04 04:57:49.770 [INFO][4313] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:57:50.280052 containerd[1635]: 2025-11-04 04:57:49.779 [INFO][4313] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" host="localhost" Nov 4 04:57:50.280052 containerd[1635]: 2025-11-04 04:57:49.790 [INFO][4313] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:57:50.280052 containerd[1635]: 2025-11-04 04:57:49.804 [INFO][4313] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:57:50.280052 containerd[1635]: 2025-11-04 04:57:49.808 [INFO][4313] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:50.280052 containerd[1635]: 2025-11-04 04:57:49.811 [INFO][4313] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:50.280052 containerd[1635]: 2025-11-04 04:57:49.811 [INFO][4313] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" host="localhost" Nov 4 04:57:50.280310 containerd[1635]: 2025-11-04 04:57:49.813 [INFO][4313] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d Nov 4 04:57:50.280310 containerd[1635]: 2025-11-04 04:57:49.819 [INFO][4313] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" host="localhost" Nov 4 04:57:50.280310 containerd[1635]: 2025-11-04 04:57:49.826 [INFO][4313] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" host="localhost" Nov 4 04:57:50.280310 containerd[1635]: 2025-11-04 04:57:49.826 [INFO][4313] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" host="localhost" Nov 4 04:57:50.280310 containerd[1635]: 2025-11-04 04:57:49.826 [INFO][4313] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:57:50.280310 containerd[1635]: 2025-11-04 04:57:49.826 [INFO][4313] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" HandleID="k8s-pod-network.3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" Workload="localhost-k8s-goldmane--666569f655--lwhrn-eth0" Nov 4 04:57:50.280492 containerd[1635]: 2025-11-04 04:57:49.830 [INFO][4270] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" Namespace="calico-system" Pod="goldmane-666569f655-lwhrn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lwhrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lwhrn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f971ab18-a5fd-481a-b739-b1338118165c", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-lwhrn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif7ed2291e0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:50.280492 containerd[1635]: 2025-11-04 04:57:49.830 [INFO][4270] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" Namespace="calico-system" Pod="goldmane-666569f655-lwhrn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lwhrn-eth0" Nov 4 04:57:50.280563 containerd[1635]: 2025-11-04 04:57:49.830 [INFO][4270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7ed2291e0d ContainerID="3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" Namespace="calico-system" Pod="goldmane-666569f655-lwhrn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lwhrn-eth0" Nov 4 04:57:50.280563 containerd[1635]: 2025-11-04 04:57:49.833 [INFO][4270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" Namespace="calico-system" Pod="goldmane-666569f655-lwhrn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lwhrn-eth0" Nov 4 04:57:50.280614 containerd[1635]: 2025-11-04 04:57:49.839 [INFO][4270] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" Namespace="calico-system" Pod="goldmane-666569f655-lwhrn" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lwhrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lwhrn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f971ab18-a5fd-481a-b739-b1338118165c", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d", Pod:"goldmane-666569f655-lwhrn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif7ed2291e0d", MAC:"e2:d4:14:3b:86:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:50.280670 containerd[1635]: 2025-11-04 04:57:50.274 [INFO][4270] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" Namespace="calico-system" Pod="goldmane-666569f655-lwhrn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lwhrn-eth0" Nov 4 04:57:50.282851 containerd[1635]: time="2025-11-04T04:57:50.282782643Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:57:50.282936 containerd[1635]: time="2025-11-04T04:57:50.282913712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:57:50.283477 kubelet[2827]: E1104 04:57:50.283401 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:57:50.283550 kubelet[2827]: E1104 04:57:50.283513 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:57:50.283824 kubelet[2827]: E1104 04:57:50.283740 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvbmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5879fbbc68-54k5g_calico-system(14be4876-4542-4022-8773-f8e166b995c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:57:50.284716 containerd[1635]: time="2025-11-04T04:57:50.284686920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:57:50.285184 kubelet[2827]: E1104 04:57:50.285142 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5879fbbc68-54k5g" podUID="14be4876-4542-4022-8773-f8e166b995c8" Nov 4 04:57:50.347145 containerd[1635]: time="2025-11-04T04:57:50.347060070Z" level=info msg="connecting to shim 3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d" address="unix:///run/containerd/s/cf7e8a71b2b8d7ee9d5394b6ed5764e39d46de735a6fd92a27e9175733fd6ed9" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:57:50.372291 systemd[1]: Started cri-containerd-3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d.scope - libcontainer container 3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d. 
Nov 4 04:57:50.387614 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:57:50.414051 kubelet[2827]: E1104 04:57:50.413601 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:50.414579 containerd[1635]: time="2025-11-04T04:57:50.414523386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c5grg,Uid:d905a9a1-a257-4351-bcbd-500992ed04d4,Namespace:kube-system,Attempt:0,}" Nov 4 04:57:50.434913 containerd[1635]: time="2025-11-04T04:57:50.434728132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lwhrn,Uid:f971ab18-a5fd-481a-b739-b1338118165c,Namespace:calico-system,Attempt:0,} returns sandbox id \"3439505fb17fe603913e231c2b358a251eb150a580e9fbdba072a9556c33043d\"" Nov 4 04:57:50.460274 systemd-networkd[1531]: cali28c4ee8a234: Gained IPv6LL Nov 4 04:57:50.540894 systemd-networkd[1531]: cali8734594b551: Link UP Nov 4 04:57:50.541115 systemd-networkd[1531]: cali8734594b551: Gained carrier Nov 4 04:57:50.557921 containerd[1635]: 2025-11-04 04:57:50.469 [INFO][4471] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--c5grg-eth0 coredns-668d6bf9bc- kube-system d905a9a1-a257-4351-bcbd-500992ed04d4 818 0 2025-11-04 04:57:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-c5grg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8734594b551 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-c5grg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c5grg-" Nov 4 04:57:50.557921 containerd[1635]: 2025-11-04 04:57:50.469 [INFO][4471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5grg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c5grg-eth0" Nov 4 04:57:50.557921 containerd[1635]: 2025-11-04 04:57:50.495 [INFO][4491] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" HandleID="k8s-pod-network.228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" Workload="localhost-k8s-coredns--668d6bf9bc--c5grg-eth0" Nov 4 04:57:50.558186 containerd[1635]: 2025-11-04 04:57:50.496 [INFO][4491] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" HandleID="k8s-pod-network.228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" Workload="localhost-k8s-coredns--668d6bf9bc--c5grg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7000), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-c5grg", "timestamp":"2025-11-04 04:57:50.495914989 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:57:50.558186 containerd[1635]: 2025-11-04 04:57:50.496 [INFO][4491] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:57:50.558186 containerd[1635]: 2025-11-04 04:57:50.496 [INFO][4491] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:57:50.558186 containerd[1635]: 2025-11-04 04:57:50.496 [INFO][4491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:57:50.558186 containerd[1635]: 2025-11-04 04:57:50.503 [INFO][4491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" host="localhost" Nov 4 04:57:50.558186 containerd[1635]: 2025-11-04 04:57:50.509 [INFO][4491] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:57:50.558186 containerd[1635]: 2025-11-04 04:57:50.514 [INFO][4491] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:57:50.558186 containerd[1635]: 2025-11-04 04:57:50.516 [INFO][4491] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:50.558186 containerd[1635]: 2025-11-04 04:57:50.519 [INFO][4491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:50.558186 containerd[1635]: 2025-11-04 04:57:50.519 [INFO][4491] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" host="localhost" Nov 4 04:57:50.558410 containerd[1635]: 2025-11-04 04:57:50.521 [INFO][4491] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797 Nov 4 04:57:50.558410 containerd[1635]: 2025-11-04 04:57:50.526 [INFO][4491] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" host="localhost" Nov 4 04:57:50.558410 containerd[1635]: 2025-11-04 04:57:50.532 [INFO][4491] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" host="localhost" Nov 4 04:57:50.558410 containerd[1635]: 2025-11-04 04:57:50.532 [INFO][4491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" host="localhost" Nov 4 04:57:50.558410 containerd[1635]: 2025-11-04 04:57:50.532 [INFO][4491] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:57:50.558410 containerd[1635]: 2025-11-04 04:57:50.532 [INFO][4491] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" HandleID="k8s-pod-network.228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" Workload="localhost-k8s-coredns--668d6bf9bc--c5grg-eth0" Nov 4 04:57:50.558535 containerd[1635]: 2025-11-04 04:57:50.536 [INFO][4471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5grg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c5grg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--c5grg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d905a9a1-a257-4351-bcbd-500992ed04d4", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-c5grg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8734594b551", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:50.558591 containerd[1635]: 2025-11-04 04:57:50.536 [INFO][4471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5grg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c5grg-eth0" Nov 4 04:57:50.558591 containerd[1635]: 2025-11-04 04:57:50.536 [INFO][4471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8734594b551 ContainerID="228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5grg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c5grg-eth0" Nov 4 04:57:50.558591 containerd[1635]: 2025-11-04 04:57:50.541 [INFO][4471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5grg" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c5grg-eth0" Nov 4 04:57:50.558673 containerd[1635]: 2025-11-04 04:57:50.542 [INFO][4471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5grg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c5grg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--c5grg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d905a9a1-a257-4351-bcbd-500992ed04d4", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797", Pod:"coredns-668d6bf9bc-c5grg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8734594b551", MAC:"2a:e2:8d:51:96:61", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:50.558673 containerd[1635]: 2025-11-04 04:57:50.553 [INFO][4471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" Namespace="kube-system" Pod="coredns-668d6bf9bc-c5grg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--c5grg-eth0" Nov 4 04:57:50.588302 containerd[1635]: time="2025-11-04T04:57:50.588240027Z" level=info msg="connecting to shim 228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797" address="unix:///run/containerd/s/906131f4b63901bb001a2f741ded4eb389bd7a8c453378e65c0da9695e5780e2" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:57:50.600485 kubelet[2827]: E1104 04:57:50.600421 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5879fbbc68-54k5g" podUID="14be4876-4542-4022-8773-f8e166b995c8" Nov 4 04:57:50.622361 systemd[1]: Started cri-containerd-228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797.scope 
- libcontainer container 228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797. Nov 4 04:57:50.642704 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:57:50.677345 containerd[1635]: time="2025-11-04T04:57:50.677297349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c5grg,Uid:d905a9a1-a257-4351-bcbd-500992ed04d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797\"" Nov 4 04:57:50.678259 kubelet[2827]: E1104 04:57:50.678230 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:50.680724 containerd[1635]: time="2025-11-04T04:57:50.680683080Z" level=info msg="CreateContainer within sandbox \"228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 04:57:50.701860 containerd[1635]: time="2025-11-04T04:57:50.701790010Z" level=info msg="Container 8c012411b147d653dc2c9df58ed361e627e281ff3b31d4dc89e74fb414ee8656: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:57:50.718220 containerd[1635]: time="2025-11-04T04:57:50.718159933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:57:50.718517 containerd[1635]: time="2025-11-04T04:57:50.718478187Z" level=info msg="CreateContainer within sandbox \"228d62c8a72d947b624619d3cf3c0ba2acc69dbc89ac3ba4f1ac18e4070cd797\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c012411b147d653dc2c9df58ed361e627e281ff3b31d4dc89e74fb414ee8656\"" Nov 4 04:57:50.720133 containerd[1635]: time="2025-11-04T04:57:50.719135255Z" level=info msg="StartContainer for \"8c012411b147d653dc2c9df58ed361e627e281ff3b31d4dc89e74fb414ee8656\"" Nov 4 04:57:50.720133 containerd[1635]: 
time="2025-11-04T04:57:50.719371363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:57:50.720133 containerd[1635]: time="2025-11-04T04:57:50.719445434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:57:50.720133 containerd[1635]: time="2025-11-04T04:57:50.719976353Z" level=info msg="connecting to shim 8c012411b147d653dc2c9df58ed361e627e281ff3b31d4dc89e74fb414ee8656" address="unix:///run/containerd/s/906131f4b63901bb001a2f741ded4eb389bd7a8c453378e65c0da9695e5780e2" protocol=ttrpc version=3 Nov 4 04:57:50.720399 kubelet[2827]: E1104 04:57:50.719594 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:57:50.720399 kubelet[2827]: E1104 04:57:50.719654 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:57:50.720399 kubelet[2827]: E1104 04:57:50.719909 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnlqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb468d79-f8pbq_calico-apiserver(7b46c654-6c31-424f-ab6b-6ce8350f8d0d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:57:50.720652 containerd[1635]: time="2025-11-04T04:57:50.720438600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:57:50.721279 kubelet[2827]: E1104 04:57:50.721209 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" podUID="7b46c654-6c31-424f-ab6b-6ce8350f8d0d" Nov 4 04:57:50.743336 systemd[1]: Started cri-containerd-8c012411b147d653dc2c9df58ed361e627e281ff3b31d4dc89e74fb414ee8656.scope - libcontainer container 8c012411b147d653dc2c9df58ed361e627e281ff3b31d4dc89e74fb414ee8656. 
Nov 4 04:57:50.799319 containerd[1635]: time="2025-11-04T04:57:50.799268016Z" level=info msg="StartContainer for \"8c012411b147d653dc2c9df58ed361e627e281ff3b31d4dc89e74fb414ee8656\" returns successfully" Nov 4 04:57:50.972884 systemd-networkd[1531]: calif7ed2291e0d: Gained IPv6LL Nov 4 04:57:50.973440 systemd-networkd[1531]: vxlan.calico: Gained IPv6LL Nov 4 04:57:51.086496 containerd[1635]: time="2025-11-04T04:57:51.086395789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:57:51.092799 containerd[1635]: time="2025-11-04T04:57:51.092677557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:57:51.093050 containerd[1635]: time="2025-11-04T04:57:51.092749122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:57:51.093611 kubelet[2827]: E1104 04:57:51.093488 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:57:51.093611 kubelet[2827]: E1104 04:57:51.093605 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:57:51.096049 kubelet[2827]: E1104 04:57:51.095886 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntjpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lwhrn_calico-system(f971ab18-a5fd-481a-b739-b1338118165c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:57:51.097608 kubelet[2827]: E1104 04:57:51.097545 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lwhrn" podUID="f971ab18-a5fd-481a-b739-b1338118165c" Nov 4 04:57:51.418269 containerd[1635]: time="2025-11-04T04:57:51.416540757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56bb65b864-t4njh,Uid:12098cb7-6382-4a20-b151-e09bfda5e484,Namespace:calico-system,Attempt:0,}" Nov 4 04:57:51.421241 containerd[1635]: time="2025-11-04T04:57:51.416742741Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-t4jsk,Uid:81cc34c3-6e55-409f-a691-f7248edc74db,Namespace:calico-system,Attempt:0,}" Nov 4 04:57:51.606924 kubelet[2827]: E1104 04:57:51.606707 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:51.607125 kubelet[2827]: E1104 04:57:51.607008 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" podUID="7b46c654-6c31-424f-ab6b-6ce8350f8d0d" Nov 4 04:57:51.613026 kubelet[2827]: E1104 04:57:51.612936 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lwhrn" podUID="f971ab18-a5fd-481a-b739-b1338118165c" Nov 4 04:57:51.739652 systemd-networkd[1531]: cali1953f35ada7: Gained IPv6LL Nov 4 04:57:51.742396 kubelet[2827]: I1104 04:57:51.742306 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c5grg" podStartSLOduration=44.740234736 podStartE2EDuration="44.740234736s" podCreationTimestamp="2025-11-04 04:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-04 04:57:51.655243493 +0000 UTC m=+50.336091077" watchObservedRunningTime="2025-11-04 04:57:51.740234736 +0000 UTC m=+50.421082310" Nov 4 04:57:51.807507 systemd-networkd[1531]: cali2c1bf16e31d: Link UP Nov 4 04:57:51.808887 systemd-networkd[1531]: cali2c1bf16e31d: Gained carrier Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.623 [INFO][4606] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0 calico-kube-controllers-56bb65b864- calico-system 12098cb7-6382-4a20-b151-e09bfda5e484 828 0 2025-11-04 04:57:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56bb65b864 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-56bb65b864-t4njh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2c1bf16e31d [] [] }} ContainerID="7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" Namespace="calico-system" Pod="calico-kube-controllers-56bb65b864-t4njh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-" Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.623 [INFO][4606] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" Namespace="calico-system" Pod="calico-kube-controllers-56bb65b864-t4njh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0" Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.671 [INFO][4628] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" 
HandleID="k8s-pod-network.7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" Workload="localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0" Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.671 [INFO][4628] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" HandleID="k8s-pod-network.7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" Workload="localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-56bb65b864-t4njh", "timestamp":"2025-11-04 04:57:51.671673048 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.671 [INFO][4628] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.672 [INFO][4628] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.672 [INFO][4628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.739 [INFO][4628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" host="localhost" Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.773 [INFO][4628] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.781 [INFO][4628] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.785 [INFO][4628] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.788 [INFO][4628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.788 [INFO][4628] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" host="localhost" Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.791 [INFO][4628] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666 Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.795 [INFO][4628] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" host="localhost" Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.801 [INFO][4628] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" host="localhost" Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.801 [INFO][4628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" host="localhost" Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.801 [INFO][4628] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:57:51.915742 containerd[1635]: 2025-11-04 04:57:51.801 [INFO][4628] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" HandleID="k8s-pod-network.7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" Workload="localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0" Nov 4 04:57:51.917126 containerd[1635]: 2025-11-04 04:57:51.805 [INFO][4606] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" Namespace="calico-system" Pod="calico-kube-controllers-56bb65b864-t4njh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0", GenerateName:"calico-kube-controllers-56bb65b864-", Namespace:"calico-system", SelfLink:"", UID:"12098cb7-6382-4a20-b151-e09bfda5e484", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56bb65b864", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-56bb65b864-t4njh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2c1bf16e31d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:51.917126 containerd[1635]: 2025-11-04 04:57:51.805 [INFO][4606] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" Namespace="calico-system" Pod="calico-kube-controllers-56bb65b864-t4njh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0" Nov 4 04:57:51.917126 containerd[1635]: 2025-11-04 04:57:51.805 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c1bf16e31d ContainerID="7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" Namespace="calico-system" Pod="calico-kube-controllers-56bb65b864-t4njh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0" Nov 4 04:57:51.917126 containerd[1635]: 2025-11-04 04:57:51.808 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" Namespace="calico-system" Pod="calico-kube-controllers-56bb65b864-t4njh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0" Nov 4 04:57:51.917126 containerd[1635]: 2025-11-04 
04:57:51.809 [INFO][4606] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" Namespace="calico-system" Pod="calico-kube-controllers-56bb65b864-t4njh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0", GenerateName:"calico-kube-controllers-56bb65b864-", Namespace:"calico-system", SelfLink:"", UID:"12098cb7-6382-4a20-b151-e09bfda5e484", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56bb65b864", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666", Pod:"calico-kube-controllers-56bb65b864-t4njh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2c1bf16e31d", MAC:"42:9f:d3:33:10:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:51.917126 containerd[1635]: 2025-11-04 
04:57:51.912 [INFO][4606] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" Namespace="calico-system" Pod="calico-kube-controllers-56bb65b864-t4njh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56bb65b864--t4njh-eth0" Nov 4 04:57:51.977144 systemd-networkd[1531]: cali55282698001: Link UP Nov 4 04:57:51.978029 systemd-networkd[1531]: cali55282698001: Gained carrier Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.628 [INFO][4598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--t4jsk-eth0 csi-node-driver- calico-system 81cc34c3-6e55-409f-a691-f7248edc74db 712 0 2025-11-04 04:57:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-t4jsk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali55282698001 [] [] }} ContainerID="ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" Namespace="calico-system" Pod="csi-node-driver-t4jsk" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4jsk-" Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.629 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" Namespace="calico-system" Pod="csi-node-driver-t4jsk" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4jsk-eth0" Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.689 [INFO][4636] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" 
HandleID="k8s-pod-network.ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" Workload="localhost-k8s-csi--node--driver--t4jsk-eth0" Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.689 [INFO][4636] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" HandleID="k8s-pod-network.ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" Workload="localhost-k8s-csi--node--driver--t4jsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001128f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-t4jsk", "timestamp":"2025-11-04 04:57:51.689331075 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.689 [INFO][4636] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.801 [INFO][4636] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.801 [INFO][4636] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.913 [INFO][4636] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" host="localhost" Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.921 [INFO][4636] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.927 [INFO][4636] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.929 [INFO][4636] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.932 [INFO][4636] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.932 [INFO][4636] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" host="localhost" Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.934 [INFO][4636] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613 Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.948 [INFO][4636] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" host="localhost" Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.967 [INFO][4636] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" host="localhost" Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.967 [INFO][4636] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" host="localhost" Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.967 [INFO][4636] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:57:52.001378 containerd[1635]: 2025-11-04 04:57:51.967 [INFO][4636] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" HandleID="k8s-pod-network.ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" Workload="localhost-k8s-csi--node--driver--t4jsk-eth0" Nov 4 04:57:52.002127 containerd[1635]: 2025-11-04 04:57:51.971 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" Namespace="calico-system" Pod="csi-node-driver-t4jsk" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4jsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t4jsk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"81cc34c3-6e55-409f-a691-f7248edc74db", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-t4jsk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali55282698001", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:52.002127 containerd[1635]: 2025-11-04 04:57:51.971 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" Namespace="calico-system" Pod="csi-node-driver-t4jsk" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4jsk-eth0" Nov 4 04:57:52.002127 containerd[1635]: 2025-11-04 04:57:51.971 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55282698001 ContainerID="ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" Namespace="calico-system" Pod="csi-node-driver-t4jsk" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4jsk-eth0" Nov 4 04:57:52.002127 containerd[1635]: 2025-11-04 04:57:51.978 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" Namespace="calico-system" Pod="csi-node-driver-t4jsk" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4jsk-eth0" Nov 4 04:57:52.002127 containerd[1635]: 2025-11-04 04:57:51.978 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" 
Namespace="calico-system" Pod="csi-node-driver-t4jsk" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4jsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t4jsk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"81cc34c3-6e55-409f-a691-f7248edc74db", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613", Pod:"csi-node-driver-t4jsk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali55282698001", MAC:"8a:95:4d:97:38:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:52.002127 containerd[1635]: 2025-11-04 04:57:51.996 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" Namespace="calico-system" Pod="csi-node-driver-t4jsk" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--t4jsk-eth0" Nov 4 04:57:52.066942 containerd[1635]: time="2025-11-04T04:57:52.066864648Z" level=info msg="connecting to shim 7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666" address="unix:///run/containerd/s/22fbcf5925cf755fd8158aaf3bdb8d096e8dc614e3dcb26c657022c5888b8440" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:57:52.099320 systemd[1]: Started cri-containerd-7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666.scope - libcontainer container 7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666. Nov 4 04:57:52.108996 containerd[1635]: time="2025-11-04T04:57:52.108939840Z" level=info msg="connecting to shim ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613" address="unix:///run/containerd/s/6281d802687fc32460f53ee836796fe8074d42230c591ebed1b7329ae9b70645" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:57:52.121382 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:57:52.138715 systemd[1]: Started cri-containerd-ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613.scope - libcontainer container ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613. 
Nov 4 04:57:52.159362 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:57:52.162036 containerd[1635]: time="2025-11-04T04:57:52.161988419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56bb65b864-t4njh,Uid:12098cb7-6382-4a20-b151-e09bfda5e484,Namespace:calico-system,Attempt:0,} returns sandbox id \"7505df6a239070bb3074e5dca2a637f82dbd8bd4bb4206c83dcbe278e4384666\"" Nov 4 04:57:52.164654 containerd[1635]: time="2025-11-04T04:57:52.164594726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:57:52.180974 containerd[1635]: time="2025-11-04T04:57:52.180892560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4jsk,Uid:81cc34c3-6e55-409f-a691-f7248edc74db,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecdc4fffca07050eac801bc4c8b685efc9378ece4e3509b820fb115604ccd613\"" Nov 4 04:57:52.251391 systemd-networkd[1531]: cali8734594b551: Gained IPv6LL Nov 4 04:57:52.414909 containerd[1635]: time="2025-11-04T04:57:52.414834037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb468d79-vxx5m,Uid:5973885e-fb9f-4950-a4af-55889b504742,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:57:52.481976 containerd[1635]: time="2025-11-04T04:57:52.481919058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:57:52.484509 containerd[1635]: time="2025-11-04T04:57:52.484432619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:57:52.484635 containerd[1635]: time="2025-11-04T04:57:52.484451605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active 
requests=0, bytes read=0" Nov 4 04:57:52.485025 kubelet[2827]: E1104 04:57:52.484959 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:57:52.485317 kubelet[2827]: E1104 04:57:52.485046 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:57:52.487119 kubelet[2827]: E1104 04:57:52.485318 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,}
,VolumeMount{Name:kube-api-access-z4dx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56bb65b864-t4njh_calico-system(12098cb7-6382-4a20-b151-e09bfda5e484): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:57:52.487119 kubelet[2827]: E1104 04:57:52.486981 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" podUID="12098cb7-6382-4a20-b151-e09bfda5e484" Nov 4 04:57:52.487288 containerd[1635]: time="2025-11-04T04:57:52.485450731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:57:52.546173 systemd-networkd[1531]: cali6da96c12e3a: Link UP Nov 4 04:57:52.547007 systemd-networkd[1531]: cali6da96c12e3a: Gained carrier Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.459 [INFO][4759] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0 calico-apiserver-5bfb468d79- calico-apiserver 5973885e-fb9f-4950-a4af-55889b504742 824 0 2025-11-04 04:57:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bfb468d79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bfb468d79-vxx5m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6da96c12e3a [] [] }} ContainerID="b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-vxx5m" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-" Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.459 [INFO][4759] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-vxx5m" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0" Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.497 [INFO][4774] ipam/ipam_plugin.go 
227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" HandleID="k8s-pod-network.b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" Workload="localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0" Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.498 [INFO][4774] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" HandleID="k8s-pod-network.b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" Workload="localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138da0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bfb468d79-vxx5m", "timestamp":"2025-11-04 04:57:52.497837753 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.498 [INFO][4774] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.498 [INFO][4774] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.498 [INFO][4774] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.505 [INFO][4774] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" host="localhost" Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.511 [INFO][4774] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.516 [INFO][4774] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.520 [INFO][4774] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.524 [INFO][4774] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.524 [INFO][4774] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" host="localhost" Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.525 [INFO][4774] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1 Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.530 [INFO][4774] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" host="localhost" Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.538 [INFO][4774] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" host="localhost" Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.538 [INFO][4774] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" host="localhost" Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.538 [INFO][4774] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:57:52.563861 containerd[1635]: 2025-11-04 04:57:52.538 [INFO][4774] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" HandleID="k8s-pod-network.b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" Workload="localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0" Nov 4 04:57:52.564572 containerd[1635]: 2025-11-04 04:57:52.543 [INFO][4759] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-vxx5m" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0", GenerateName:"calico-apiserver-5bfb468d79-", Namespace:"calico-apiserver", SelfLink:"", UID:"5973885e-fb9f-4950-a4af-55889b504742", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bfb468d79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bfb468d79-vxx5m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6da96c12e3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:52.564572 containerd[1635]: 2025-11-04 04:57:52.543 [INFO][4759] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-vxx5m" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0" Nov 4 04:57:52.564572 containerd[1635]: 2025-11-04 04:57:52.543 [INFO][4759] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6da96c12e3a ContainerID="b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-vxx5m" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0" Nov 4 04:57:52.564572 containerd[1635]: 2025-11-04 04:57:52.547 [INFO][4759] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-vxx5m" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0" Nov 4 04:57:52.564572 containerd[1635]: 2025-11-04 04:57:52.548 [INFO][4759] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-vxx5m" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0", GenerateName:"calico-apiserver-5bfb468d79-", Namespace:"calico-apiserver", SelfLink:"", UID:"5973885e-fb9f-4950-a4af-55889b504742", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bfb468d79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1", Pod:"calico-apiserver-5bfb468d79-vxx5m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6da96c12e3a", MAC:"b2:a0:02:06:fb:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:52.564572 containerd[1635]: 2025-11-04 04:57:52.559 [INFO][4759] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" Namespace="calico-apiserver" Pod="calico-apiserver-5bfb468d79-vxx5m" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bfb468d79--vxx5m-eth0" Nov 4 04:57:52.602529 containerd[1635]: time="2025-11-04T04:57:52.602443111Z" level=info msg="connecting to shim b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1" address="unix:///run/containerd/s/49ef915d8d4d664b5a1c435a577ec1831e983b675bafaf40c767dc431af7ef6e" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:57:52.610188 kubelet[2827]: E1104 04:57:52.610086 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" podUID="12098cb7-6382-4a20-b151-e09bfda5e484" Nov 4 04:57:52.611806 kubelet[2827]: E1104 04:57:52.611732 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:52.638332 systemd[1]: Started cri-containerd-b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1.scope - libcontainer container b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1. 
Nov 4 04:57:52.658967 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:57:52.702829 containerd[1635]: time="2025-11-04T04:57:52.702762864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bfb468d79-vxx5m,Uid:5973885e-fb9f-4950-a4af-55889b504742,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b22eaa8b6c7c4aa572e026b1ddb727fcd1a00b51e4f0efc4637881cf8db66bb1\"" Nov 4 04:57:52.964517 containerd[1635]: time="2025-11-04T04:57:52.964319957Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:57:52.966423 containerd[1635]: time="2025-11-04T04:57:52.966341204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:57:52.966423 containerd[1635]: time="2025-11-04T04:57:52.966394375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:57:52.966784 kubelet[2827]: E1104 04:57:52.966726 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:57:52.966857 kubelet[2827]: E1104 04:57:52.966793 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:57:52.967231 kubelet[2827]: E1104 04:57:52.967175 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjjtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t4jsk_calico-system(81cc34c3-6e55-409f-a691-f7248edc74db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 04:57:52.967372 containerd[1635]: time="2025-11-04T04:57:52.967290205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:57:53.275423 systemd-networkd[1531]: cali2c1bf16e31d: Gained IPv6LL Nov 4 04:57:53.294704 containerd[1635]: time="2025-11-04T04:57:53.294603526Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:57:53.296966 containerd[1635]: time="2025-11-04T04:57:53.296871730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:57:53.297064 containerd[1635]: time="2025-11-04T04:57:53.296967160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:57:53.297316 kubelet[2827]: E1104 04:57:53.297236 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:57:53.297396 kubelet[2827]: E1104 04:57:53.297332 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:57:53.297716 kubelet[2827]: E1104 04:57:53.297606 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h7zlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb468d79-vxx5m_calico-apiserver(5973885e-fb9f-4950-a4af-55889b504742): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:57:53.297921 containerd[1635]: time="2025-11-04T04:57:53.297847511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:57:53.299414 kubelet[2827]: E1104 04:57:53.299299 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" podUID="5973885e-fb9f-4950-a4af-55889b504742" Nov 4 04:57:53.414050 kubelet[2827]: E1104 04:57:53.413979 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:53.414640 containerd[1635]: time="2025-11-04T04:57:53.414589924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnpmk,Uid:50772838-873f-4a8d-accc-2159be973082,Namespace:kube-system,Attempt:0,}" Nov 4 04:57:53.614869 kubelet[2827]: E1104 04:57:53.614742 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" podUID="5973885e-fb9f-4950-a4af-55889b504742" Nov 4 04:57:53.615443 kubelet[2827]: E1104 04:57:53.614768 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" podUID="12098cb7-6382-4a20-b151-e09bfda5e484" Nov 4 04:57:53.615625 kubelet[2827]: E1104 04:57:53.615448 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:53.672175 containerd[1635]: time="2025-11-04T04:57:53.672070119Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:57:53.723380 systemd-networkd[1531]: cali55282698001: Gained IPv6LL Nov 4 
04:57:53.858158 containerd[1635]: time="2025-11-04T04:57:53.858071797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:57:53.858158 containerd[1635]: time="2025-11-04T04:57:53.858148753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:57:53.858391 kubelet[2827]: E1104 04:57:53.858349 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:57:53.858439 kubelet[2827]: E1104 04:57:53.858406 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:57:53.858553 kubelet[2827]: E1104 04:57:53.858521 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjjtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t4jsk_calico-system(81cc34c3-6e55-409f-a691-f7248edc74db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:57:53.859746 kubelet[2827]: E1104 04:57:53.859704 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:57:54.052876 systemd-networkd[1531]: calif45db7c0d99: Link UP Nov 4 04:57:54.054255 systemd-networkd[1531]: calif45db7c0d99: Gained carrier Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:53.956 [INFO][4839] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0 coredns-668d6bf9bc- kube-system 50772838-873f-4a8d-accc-2159be973082 829 0 2025-11-04 04:57:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-dnpmk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif45db7c0d99 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnpmk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnpmk-" Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:53.957 [INFO][4839] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnpmk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0" Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.000 [INFO][4855] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" HandleID="k8s-pod-network.4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" Workload="localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0" Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.001 [INFO][4855] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" HandleID="k8s-pod-network.4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" Workload="localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ee10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-dnpmk", "timestamp":"2025-11-04 04:57:54.000737027 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.001 [INFO][4855] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.001 [INFO][4855] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.001 [INFO][4855] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.008 [INFO][4855] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" host="localhost" Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.012 [INFO][4855] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.023 [INFO][4855] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.025 [INFO][4855] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.028 [INFO][4855] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.028 [INFO][4855] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" host="localhost" Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.029 [INFO][4855] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.037 [INFO][4855] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" host="localhost" Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.044 [INFO][4855] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" host="localhost" Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.044 [INFO][4855] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" host="localhost" Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.044 [INFO][4855] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 04:57:54.076371 containerd[1635]: 2025-11-04 04:57:54.044 [INFO][4855] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" HandleID="k8s-pod-network.4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" Workload="localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0" Nov 4 04:57:54.076958 containerd[1635]: 2025-11-04 04:57:54.048 [INFO][4839] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnpmk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"50772838-873f-4a8d-accc-2159be973082", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-dnpmk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif45db7c0d99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:54.076958 containerd[1635]: 2025-11-04 04:57:54.048 [INFO][4839] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnpmk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0" Nov 4 04:57:54.076958 containerd[1635]: 2025-11-04 04:57:54.048 [INFO][4839] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif45db7c0d99 ContainerID="4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnpmk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0" Nov 4 04:57:54.076958 containerd[1635]: 2025-11-04 04:57:54.055 [INFO][4839] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnpmk" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0" Nov 4 04:57:54.076958 containerd[1635]: 2025-11-04 04:57:54.056 [INFO][4839] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnpmk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"50772838-873f-4a8d-accc-2159be973082", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 57, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d", Pod:"coredns-668d6bf9bc-dnpmk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif45db7c0d99", MAC:"22:ad:d0:0b:9e:dc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:57:54.076958 containerd[1635]: 2025-11-04 04:57:54.069 [INFO][4839] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnpmk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnpmk-eth0" Nov 4 04:57:54.112127 containerd[1635]: time="2025-11-04T04:57:54.112032438Z" level=info msg="connecting to shim 4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d" address="unix:///run/containerd/s/fa0f2a01a2ab50ff06da0f5936e6490be5ac2a7b65493eb7aba2cc515d12f97f" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:57:54.148459 systemd[1]: Started cri-containerd-4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d.scope - libcontainer container 4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d. 
Nov 4 04:57:54.170615 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 04:57:54.293302 containerd[1635]: time="2025-11-04T04:57:54.293230383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnpmk,Uid:50772838-873f-4a8d-accc-2159be973082,Namespace:kube-system,Attempt:0,} returns sandbox id \"4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d\"" Nov 4 04:57:54.294344 kubelet[2827]: E1104 04:57:54.294290 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:54.296626 containerd[1635]: time="2025-11-04T04:57:54.296570008Z" level=info msg="CreateContainer within sandbox \"4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 04:57:54.312776 containerd[1635]: time="2025-11-04T04:57:54.310898587Z" level=info msg="Container 7443796c4498901cb0d2bf7eedb61419191ee2080bd1af54ee7b601eabea9fcc: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:57:54.318620 containerd[1635]: time="2025-11-04T04:57:54.318573075Z" level=info msg="CreateContainer within sandbox \"4842ae428679b64cb277636891caa07e7172629fe4daad7d2fbe22ee4895b99d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7443796c4498901cb0d2bf7eedb61419191ee2080bd1af54ee7b601eabea9fcc\"" Nov 4 04:57:54.319322 containerd[1635]: time="2025-11-04T04:57:54.319219210Z" level=info msg="StartContainer for \"7443796c4498901cb0d2bf7eedb61419191ee2080bd1af54ee7b601eabea9fcc\"" Nov 4 04:57:54.320333 containerd[1635]: time="2025-11-04T04:57:54.320305651Z" level=info msg="connecting to shim 7443796c4498901cb0d2bf7eedb61419191ee2080bd1af54ee7b601eabea9fcc" address="unix:///run/containerd/s/fa0f2a01a2ab50ff06da0f5936e6490be5ac2a7b65493eb7aba2cc515d12f97f" protocol=ttrpc version=3 Nov 4 
04:57:54.347315 systemd[1]: Started cri-containerd-7443796c4498901cb0d2bf7eedb61419191ee2080bd1af54ee7b601eabea9fcc.scope - libcontainer container 7443796c4498901cb0d2bf7eedb61419191ee2080bd1af54ee7b601eabea9fcc. Nov 4 04:57:54.394996 containerd[1635]: time="2025-11-04T04:57:54.394929691Z" level=info msg="StartContainer for \"7443796c4498901cb0d2bf7eedb61419191ee2080bd1af54ee7b601eabea9fcc\" returns successfully" Nov 4 04:57:54.555551 systemd-networkd[1531]: cali6da96c12e3a: Gained IPv6LL Nov 4 04:57:54.619230 kubelet[2827]: E1104 04:57:54.619058 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:54.620842 kubelet[2827]: E1104 04:57:54.620684 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:57:54.620842 kubelet[2827]: E1104 04:57:54.620788 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" podUID="5973885e-fb9f-4950-a4af-55889b504742" Nov 4 04:57:54.657280 kubelet[2827]: I1104 04:57:54.656866 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dnpmk" podStartSLOduration=47.656843302 podStartE2EDuration="47.656843302s" podCreationTimestamp="2025-11-04 04:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:57:54.656232082 +0000 UTC m=+53.337079636" watchObservedRunningTime="2025-11-04 04:57:54.656843302 +0000 UTC m=+53.337690856" Nov 4 04:57:55.621562 kubelet[2827]: E1104 04:57:55.621511 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:55.771311 systemd-networkd[1531]: calif45db7c0d99: Gained IPv6LL Nov 4 04:57:56.623588 kubelet[2827]: E1104 04:57:56.623515 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:57:57.625677 kubelet[2827]: E1104 04:57:57.625597 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:02.404219 systemd[1]: Started sshd@7-10.0.0.39:22-10.0.0.1:46610.service - OpenSSH per-connection server daemon (10.0.0.1:46610). 
Nov 4 04:58:02.554182 sshd[4978]: Accepted publickey for core from 10.0.0.1 port 46610 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:02.557179 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:02.566780 systemd-logind[1608]: New session 8 of user core. Nov 4 04:58:02.578385 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 04:58:02.956778 sshd[4981]: Connection closed by 10.0.0.1 port 46610 Nov 4 04:58:02.957154 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:02.966188 systemd[1]: sshd@7-10.0.0.39:22-10.0.0.1:46610.service: Deactivated successfully. Nov 4 04:58:02.968593 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 04:58:02.969497 systemd-logind[1608]: Session 8 logged out. Waiting for processes to exit. Nov 4 04:58:02.970991 systemd-logind[1608]: Removed session 8. Nov 4 04:58:03.414769 containerd[1635]: time="2025-11-04T04:58:03.414719709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:58:04.072271 containerd[1635]: time="2025-11-04T04:58:04.072204424Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:04.176202 containerd[1635]: time="2025-11-04T04:58:04.176088081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:58:04.176383 containerd[1635]: time="2025-11-04T04:58:04.176135421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:04.176511 kubelet[2827]: E1104 04:58:04.176453 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:58:04.176973 kubelet[2827]: E1104 04:58:04.176537 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:58:04.176973 kubelet[2827]: E1104 04:58:04.176743 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntjpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:n
il,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lwhrn_calico-system(f971ab18-a5fd-481a-b739-b1338118165c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:04.178062 kubelet[2827]: E1104 04:58:04.178009 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lwhrn" 
podUID="f971ab18-a5fd-481a-b739-b1338118165c" Nov 4 04:58:04.414754 containerd[1635]: time="2025-11-04T04:58:04.414353251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:58:04.783210 containerd[1635]: time="2025-11-04T04:58:04.783088595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:04.784423 containerd[1635]: time="2025-11-04T04:58:04.784354670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:58:04.784423 containerd[1635]: time="2025-11-04T04:58:04.784404825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:04.784668 kubelet[2827]: E1104 04:58:04.784614 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:58:04.784738 kubelet[2827]: E1104 04:58:04.784693 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:58:04.784941 kubelet[2827]: E1104 04:58:04.784871 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56bb65b864-t4njh_calico-system(12098cb7-6382-4a20-b151-e09bfda5e484): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:04.786150 kubelet[2827]: E1104 04:58:04.786093 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" podUID="12098cb7-6382-4a20-b151-e09bfda5e484" Nov 4 04:58:05.415297 containerd[1635]: time="2025-11-04T04:58:05.415225738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:58:05.867931 containerd[1635]: time="2025-11-04T04:58:05.867841607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 
04:58:05.869760 containerd[1635]: time="2025-11-04T04:58:05.869707767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:58:05.869858 containerd[1635]: time="2025-11-04T04:58:05.869768862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:05.870069 kubelet[2827]: E1104 04:58:05.870013 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:58:05.870391 kubelet[2827]: E1104 04:58:05.870084 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:58:05.870486 containerd[1635]: time="2025-11-04T04:58:05.870457424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:58:05.870564 kubelet[2827]: E1104 04:58:05.870455 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b917bdde86d944a1958ebd59231f5dea,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvbmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5879fbbc68-54k5g_calico-system(14be4876-4542-4022-8773-f8e166b995c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:06.275368 containerd[1635]: time="2025-11-04T04:58:06.275282678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:06.276810 containerd[1635]: 
time="2025-11-04T04:58:06.276751075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:58:06.277010 containerd[1635]: time="2025-11-04T04:58:06.276798294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:06.277039 kubelet[2827]: E1104 04:58:06.276993 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:58:06.277127 kubelet[2827]: E1104 04:58:06.277053 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:58:06.277528 containerd[1635]: time="2025-11-04T04:58:06.277487196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:58:06.277684 kubelet[2827]: E1104 04:58:06.277488 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnlqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb468d79-f8pbq_calico-apiserver(7b46c654-6c31-424f-ab6b-6ce8350f8d0d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:06.278676 kubelet[2827]: E1104 04:58:06.278644 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" podUID="7b46c654-6c31-424f-ab6b-6ce8350f8d0d" Nov 4 04:58:06.622605 containerd[1635]: time="2025-11-04T04:58:06.622439521Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:06.623803 containerd[1635]: time="2025-11-04T04:58:06.623741813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:58:06.623876 containerd[1635]: time="2025-11-04T04:58:06.623796417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:06.624035 kubelet[2827]: E1104 04:58:06.623976 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:58:06.624139 kubelet[2827]: E1104 04:58:06.624040 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:58:06.624262 kubelet[2827]: E1104 04:58:06.624208 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvbmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPr
obe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5879fbbc68-54k5g_calico-system(14be4876-4542-4022-8773-f8e166b995c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:06.625427 kubelet[2827]: E1104 04:58:06.625394 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5879fbbc68-54k5g" podUID="14be4876-4542-4022-8773-f8e166b995c8" Nov 4 04:58:07.415065 containerd[1635]: time="2025-11-04T04:58:07.414992288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:58:07.731520 containerd[1635]: time="2025-11-04T04:58:07.731313157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:07.759242 containerd[1635]: time="2025-11-04T04:58:07.759167127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:58:07.759417 containerd[1635]: time="2025-11-04T04:58:07.759195410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" 
Nov 4 04:58:07.759551 kubelet[2827]: E1104 04:58:07.759474 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:58:07.759927 kubelet[2827]: E1104 04:58:07.759555 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:58:07.759927 kubelet[2827]: E1104 04:58:07.759705 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h7zlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb468d79-vxx5m_calico-apiserver(5973885e-fb9f-4950-a4af-55889b504742): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:07.760897 kubelet[2827]: E1104 04:58:07.760865 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" podUID="5973885e-fb9f-4950-a4af-55889b504742" Nov 4 04:58:07.974533 systemd[1]: Started sshd@8-10.0.0.39:22-10.0.0.1:33394.service - OpenSSH per-connection server daemon (10.0.0.1:33394). 
Nov 4 04:58:08.020319 sshd[4998]: Accepted publickey for core from 10.0.0.1 port 33394 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:08.022204 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:08.027492 systemd-logind[1608]: New session 9 of user core. Nov 4 04:58:08.041420 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 04:58:08.172672 sshd[5001]: Connection closed by 10.0.0.1 port 33394 Nov 4 04:58:08.173461 sshd-session[4998]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:08.182699 systemd[1]: sshd@8-10.0.0.39:22-10.0.0.1:33394.service: Deactivated successfully. Nov 4 04:58:08.189221 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 04:58:08.193265 systemd-logind[1608]: Session 9 logged out. Waiting for processes to exit. Nov 4 04:58:08.197952 systemd-logind[1608]: Removed session 9. Nov 4 04:58:10.415206 containerd[1635]: time="2025-11-04T04:58:10.415155656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:58:10.816566 containerd[1635]: time="2025-11-04T04:58:10.816487756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:10.950499 containerd[1635]: time="2025-11-04T04:58:10.950389206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:10.950499 containerd[1635]: time="2025-11-04T04:58:10.950460701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:58:10.950844 kubelet[2827]: E1104 04:58:10.950777 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:58:10.950844 kubelet[2827]: E1104 04:58:10.950846 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:58:10.951435 kubelet[2827]: E1104 04:58:10.950983 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjjtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivil
egeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t4jsk_calico-system(81cc34c3-6e55-409f-a691-f7248edc74db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:10.953761 containerd[1635]: time="2025-11-04T04:58:10.953693300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:58:11.383845 containerd[1635]: time="2025-11-04T04:58:11.383783264Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:11.463232 containerd[1635]: time="2025-11-04T04:58:11.463168090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:58:11.463778 containerd[1635]: time="2025-11-04T04:58:11.463262939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:11.463806 kubelet[2827]: E1104 04:58:11.463372 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:58:11.463806 
kubelet[2827]: E1104 04:58:11.463427 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:58:11.463806 kubelet[2827]: E1104 04:58:11.463561 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjjtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*tr
ue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t4jsk_calico-system(81cc34c3-6e55-409f-a691-f7248edc74db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:11.465265 kubelet[2827]: E1104 04:58:11.465221 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:58:13.188324 systemd[1]: Started sshd@9-10.0.0.39:22-10.0.0.1:36522.service - OpenSSH per-connection server daemon (10.0.0.1:36522). Nov 4 04:58:13.247537 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 36522 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:13.249499 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:13.255649 systemd-logind[1608]: New session 10 of user core. 
Nov 4 04:58:13.271446 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 04:58:13.365751 sshd[5027]: Connection closed by 10.0.0.1 port 36522 Nov 4 04:58:13.366314 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:13.374813 systemd[1]: sshd@9-10.0.0.39:22-10.0.0.1:36522.service: Deactivated successfully. Nov 4 04:58:13.377750 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 04:58:13.379756 systemd-logind[1608]: Session 10 logged out. Waiting for processes to exit. Nov 4 04:58:13.381996 systemd-logind[1608]: Removed session 10. Nov 4 04:58:13.421204 kubelet[2827]: E1104 04:58:13.421133 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:16.414468 kubelet[2827]: E1104 04:58:16.414376 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" podUID="12098cb7-6382-4a20-b151-e09bfda5e484" Nov 4 04:58:17.415271 kubelet[2827]: E1104 04:58:17.415190 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5879fbbc68-54k5g" podUID="14be4876-4542-4022-8773-f8e166b995c8" Nov 4 04:58:18.391769 systemd[1]: Started sshd@10-10.0.0.39:22-10.0.0.1:36530.service - OpenSSH per-connection server daemon (10.0.0.1:36530). Nov 4 04:58:18.414725 kubelet[2827]: E1104 04:58:18.414648 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lwhrn" podUID="f971ab18-a5fd-481a-b739-b1338118165c" Nov 4 04:58:18.465849 sshd[5044]: Accepted publickey for core from 10.0.0.1 port 36530 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:18.467907 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:18.474153 systemd-logind[1608]: New session 11 of user core. Nov 4 04:58:18.488417 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 04:58:18.602853 sshd[5047]: Connection closed by 10.0.0.1 port 36530 Nov 4 04:58:18.604563 sshd-session[5044]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:18.616278 systemd[1]: sshd@10-10.0.0.39:22-10.0.0.1:36530.service: Deactivated successfully. Nov 4 04:58:18.619553 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 04:58:18.621343 systemd-logind[1608]: Session 11 logged out. Waiting for processes to exit. 
Nov 4 04:58:18.626306 systemd[1]: Started sshd@11-10.0.0.39:22-10.0.0.1:36534.service - OpenSSH per-connection server daemon (10.0.0.1:36534). Nov 4 04:58:18.628353 systemd-logind[1608]: Removed session 11. Nov 4 04:58:18.703634 sshd[5080]: Accepted publickey for core from 10.0.0.1 port 36534 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:18.705266 kubelet[2827]: E1104 04:58:18.705203 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:18.706239 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:18.713139 systemd-logind[1608]: New session 12 of user core. Nov 4 04:58:18.727146 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 04:58:18.862573 sshd[5090]: Connection closed by 10.0.0.1 port 36534 Nov 4 04:58:18.863306 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:18.879596 systemd[1]: sshd@11-10.0.0.39:22-10.0.0.1:36534.service: Deactivated successfully. Nov 4 04:58:18.885159 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 04:58:18.888180 systemd-logind[1608]: Session 12 logged out. Waiting for processes to exit. Nov 4 04:58:18.892743 systemd[1]: Started sshd@12-10.0.0.39:22-10.0.0.1:36550.service - OpenSSH per-connection server daemon (10.0.0.1:36550). Nov 4 04:58:18.895724 systemd-logind[1608]: Removed session 12. Nov 4 04:58:18.957635 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 36550 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:18.959307 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:18.964658 systemd-logind[1608]: New session 13 of user core. Nov 4 04:58:18.976438 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 4 04:58:19.073091 sshd[5104]: Connection closed by 10.0.0.1 port 36550 Nov 4 04:58:19.073447 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:19.079236 systemd[1]: sshd@12-10.0.0.39:22-10.0.0.1:36550.service: Deactivated successfully. Nov 4 04:58:19.081693 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 04:58:19.082721 systemd-logind[1608]: Session 13 logged out. Waiting for processes to exit. Nov 4 04:58:19.084220 systemd-logind[1608]: Removed session 13. Nov 4 04:58:19.414298 kubelet[2827]: E1104 04:58:19.414227 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:20.415132 kubelet[2827]: E1104 04:58:20.414983 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" podUID="7b46c654-6c31-424f-ab6b-6ce8350f8d0d" Nov 4 04:58:22.414935 kubelet[2827]: E1104 04:58:22.414864 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" podUID="5973885e-fb9f-4950-a4af-55889b504742" Nov 4 04:58:24.089224 systemd[1]: Started 
sshd@13-10.0.0.39:22-10.0.0.1:38176.service - OpenSSH per-connection server daemon (10.0.0.1:38176). Nov 4 04:58:24.155891 sshd[5120]: Accepted publickey for core from 10.0.0.1 port 38176 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:24.158115 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:24.164500 systemd-logind[1608]: New session 14 of user core. Nov 4 04:58:24.179418 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 04:58:24.306738 sshd[5123]: Connection closed by 10.0.0.1 port 38176 Nov 4 04:58:24.307645 sshd-session[5120]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:24.319744 systemd[1]: sshd@13-10.0.0.39:22-10.0.0.1:38176.service: Deactivated successfully. Nov 4 04:58:24.322506 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 04:58:24.323744 systemd-logind[1608]: Session 14 logged out. Waiting for processes to exit. Nov 4 04:58:24.325502 systemd-logind[1608]: Removed session 14. 
Nov 4 04:58:25.415156 kubelet[2827]: E1104 04:58:25.415054 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:58:27.417851 containerd[1635]: time="2025-11-04T04:58:27.417589940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:58:27.757955 containerd[1635]: time="2025-11-04T04:58:27.757860772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:27.760514 containerd[1635]: time="2025-11-04T04:58:27.760457016Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:58:27.760605 containerd[1635]: time="2025-11-04T04:58:27.760572253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:27.760864 kubelet[2827]: E1104 04:58:27.760794 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:58:27.761342 kubelet[2827]: E1104 04:58:27.760897 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:58:27.761342 kubelet[2827]: E1104 04:58:27.761182 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56bb65b864-t4njh_calico-system(12098cb7-6382-4a20-b151-e09bfda5e484): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:27.762398 kubelet[2827]: E1104 04:58:27.762346 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" 
podUID="12098cb7-6382-4a20-b151-e09bfda5e484" Nov 4 04:58:29.322599 systemd[1]: Started sshd@14-10.0.0.39:22-10.0.0.1:38190.service - OpenSSH per-connection server daemon (10.0.0.1:38190). Nov 4 04:58:29.438475 sshd[5143]: Accepted publickey for core from 10.0.0.1 port 38190 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:29.441089 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:29.448866 systemd-logind[1608]: New session 15 of user core. Nov 4 04:58:29.457478 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 04:58:29.596773 sshd[5146]: Connection closed by 10.0.0.1 port 38190 Nov 4 04:58:29.597002 sshd-session[5143]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:29.603794 systemd[1]: sshd@14-10.0.0.39:22-10.0.0.1:38190.service: Deactivated successfully. Nov 4 04:58:29.605917 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 04:58:29.606813 systemd-logind[1608]: Session 15 logged out. Waiting for processes to exit. Nov 4 04:58:29.608323 systemd-logind[1608]: Removed session 15. 
Nov 4 04:58:31.416445 containerd[1635]: time="2025-11-04T04:58:31.416359250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:58:31.750798 containerd[1635]: time="2025-11-04T04:58:31.750742256Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:31.752140 containerd[1635]: time="2025-11-04T04:58:31.752090656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:58:31.752321 containerd[1635]: time="2025-11-04T04:58:31.752149537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:31.752486 kubelet[2827]: E1104 04:58:31.752423 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:58:31.752959 kubelet[2827]: E1104 04:58:31.752492 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:58:31.752959 kubelet[2827]: E1104 04:58:31.752860 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntjpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lwhrn_calico-system(f971ab18-a5fd-481a-b739-b1338118165c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:31.753372 containerd[1635]: time="2025-11-04T04:58:31.753343446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:58:31.754051 kubelet[2827]: E1104 04:58:31.754011 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lwhrn" podUID="f971ab18-a5fd-481a-b739-b1338118165c" Nov 4 04:58:32.072443 containerd[1635]: time="2025-11-04T04:58:32.072257345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:32.074292 containerd[1635]: time="2025-11-04T04:58:32.074208681Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:58:32.074493 containerd[1635]: time="2025-11-04T04:58:32.074211215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:32.074629 kubelet[2827]: E1104 04:58:32.074575 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:58:32.074714 kubelet[2827]: E1104 04:58:32.074647 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:58:32.074857 kubelet[2827]: E1104 04:58:32.074814 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b917bdde86d944a1958ebd59231f5dea,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvbmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5879fbbc68-54k5g_calico-system(14be4876-4542-4022-8773-f8e166b995c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:32.077201 containerd[1635]: time="2025-11-04T04:58:32.077143038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:58:32.418639 containerd[1635]: 
time="2025-11-04T04:58:32.418409952Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:32.421033 containerd[1635]: time="2025-11-04T04:58:32.420951580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:58:32.421138 containerd[1635]: time="2025-11-04T04:58:32.421057519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:32.421286 kubelet[2827]: E1104 04:58:32.421221 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:58:32.421286 kubelet[2827]: E1104 04:58:32.421283 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:58:32.421898 kubelet[2827]: E1104 04:58:32.421529 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvbmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5879fbbc68-54k5g_calico-system(14be4876-4542-4022-8773-f8e166b995c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:32.422040 containerd[1635]: time="2025-11-04T04:58:32.421672859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:58:32.424264 kubelet[2827]: E1104 04:58:32.424210 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5879fbbc68-54k5g" podUID="14be4876-4542-4022-8773-f8e166b995c8" Nov 4 04:58:32.736150 containerd[1635]: time="2025-11-04T04:58:32.735863465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:32.737564 containerd[1635]: time="2025-11-04T04:58:32.737491832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:58:32.737641 containerd[1635]: time="2025-11-04T04:58:32.737547327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:32.737898 kubelet[2827]: E1104 04:58:32.737837 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:58:32.737963 kubelet[2827]: E1104 04:58:32.737920 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:58:32.738253 kubelet[2827]: E1104 04:58:32.738195 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnlqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb468d79-f8pbq_calico-apiserver(7b46c654-6c31-424f-ab6b-6ce8350f8d0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:32.739445 kubelet[2827]: E1104 04:58:32.739393 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" podUID="7b46c654-6c31-424f-ab6b-6ce8350f8d0d" Nov 4 04:58:33.413371 kubelet[2827]: E1104 04:58:33.413306 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:34.615744 systemd[1]: Started 
sshd@15-10.0.0.39:22-10.0.0.1:52684.service - OpenSSH per-connection server daemon (10.0.0.1:52684). Nov 4 04:58:34.672878 sshd[5168]: Accepted publickey for core from 10.0.0.1 port 52684 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:34.674772 sshd-session[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:34.679899 systemd-logind[1608]: New session 16 of user core. Nov 4 04:58:34.692285 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 04:58:34.769179 sshd[5171]: Connection closed by 10.0.0.1 port 52684 Nov 4 04:58:34.769543 sshd-session[5168]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:34.774516 systemd[1]: sshd@15-10.0.0.39:22-10.0.0.1:52684.service: Deactivated successfully. Nov 4 04:58:34.776961 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 04:58:34.777975 systemd-logind[1608]: Session 16 logged out. Waiting for processes to exit. Nov 4 04:58:34.779405 systemd-logind[1608]: Removed session 16. 
Nov 4 04:58:36.414859 containerd[1635]: time="2025-11-04T04:58:36.414813980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:58:36.951403 containerd[1635]: time="2025-11-04T04:58:36.951262850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:36.954460 containerd[1635]: time="2025-11-04T04:58:36.954322231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:58:36.954543 containerd[1635]: time="2025-11-04T04:58:36.954489396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:36.957906 kubelet[2827]: E1104 04:58:36.957824 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:58:36.957906 kubelet[2827]: E1104 04:58:36.957902 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:58:36.958486 kubelet[2827]: E1104 04:58:36.958080 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h7zlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bfb468d79-vxx5m_calico-apiserver(5973885e-fb9f-4950-a4af-55889b504742): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:36.959319 kubelet[2827]: E1104 04:58:36.959258 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" podUID="5973885e-fb9f-4950-a4af-55889b504742" Nov 4 04:58:38.415451 containerd[1635]: time="2025-11-04T04:58:38.415379140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:58:38.802320 containerd[1635]: time="2025-11-04T04:58:38.802249100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:38.842089 containerd[1635]: time="2025-11-04T04:58:38.842002972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:38.842089 containerd[1635]: time="2025-11-04T04:58:38.842047796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:58:38.842482 kubelet[2827]: E1104 04:58:38.842410 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:58:38.842928 kubelet[2827]: E1104 04:58:38.842492 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:58:38.842928 kubelet[2827]: E1104 04:58:38.842677 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjjtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t4jsk_calico-system(81cc34c3-6e55-409f-a691-f7248edc74db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:38.845331 containerd[1635]: time="2025-11-04T04:58:38.845243733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:58:39.232982 containerd[1635]: time="2025-11-04T04:58:39.232807967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:58:39.260355 containerd[1635]: time="2025-11-04T04:58:39.260244482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:58:39.260355 containerd[1635]: time="2025-11-04T04:58:39.260309826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:58:39.260713 kubelet[2827]: E1104 04:58:39.260652 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:58:39.260792 kubelet[2827]: E1104 04:58:39.260735 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:58:39.261086 kubelet[2827]: E1104 04:58:39.260974 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjjtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volu
meDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t4jsk_calico-system(81cc34c3-6e55-409f-a691-f7248edc74db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:58:39.262870 kubelet[2827]: E1104 04:58:39.262808 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:58:39.789927 systemd[1]: Started sshd@16-10.0.0.39:22-10.0.0.1:52692.service - OpenSSH per-connection server daemon (10.0.0.1:52692). Nov 4 04:58:39.852973 sshd[5187]: Accepted publickey for core from 10.0.0.1 port 52692 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:39.855040 sshd-session[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:39.861348 systemd-logind[1608]: New session 17 of user core. Nov 4 04:58:39.869443 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 4 04:58:40.015274 sshd[5190]: Connection closed by 10.0.0.1 port 52692 Nov 4 04:58:40.016504 sshd-session[5187]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:40.024916 systemd[1]: sshd@16-10.0.0.39:22-10.0.0.1:52692.service: Deactivated successfully. Nov 4 04:58:40.027822 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 04:58:40.028813 systemd-logind[1608]: Session 17 logged out. Waiting for processes to exit. Nov 4 04:58:40.031049 systemd-logind[1608]: Removed session 17. Nov 4 04:58:42.414307 kubelet[2827]: E1104 04:58:42.414226 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:42.415149 kubelet[2827]: E1104 04:58:42.414887 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" podUID="12098cb7-6382-4a20-b151-e09bfda5e484" Nov 4 04:58:45.034819 systemd[1]: Started sshd@17-10.0.0.39:22-10.0.0.1:51056.service - OpenSSH per-connection server daemon (10.0.0.1:51056). Nov 4 04:58:45.099406 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 51056 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:45.101706 sshd-session[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:45.107183 systemd-logind[1608]: New session 18 of user core. Nov 4 04:58:45.122417 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 4 04:58:45.216289 sshd[5206]: Connection closed by 10.0.0.1 port 51056 Nov 4 04:58:45.216632 sshd-session[5203]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:45.227596 systemd[1]: sshd@17-10.0.0.39:22-10.0.0.1:51056.service: Deactivated successfully. Nov 4 04:58:45.230000 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 04:58:45.231031 systemd-logind[1608]: Session 18 logged out. Waiting for processes to exit. Nov 4 04:58:45.234208 systemd[1]: Started sshd@18-10.0.0.39:22-10.0.0.1:51066.service - OpenSSH per-connection server daemon (10.0.0.1:51066). Nov 4 04:58:45.235630 systemd-logind[1608]: Removed session 18. Nov 4 04:58:45.308395 sshd[5219]: Accepted publickey for core from 10.0.0.1 port 51066 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:45.311247 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:45.317629 systemd-logind[1608]: New session 19 of user core. Nov 4 04:58:45.325584 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 4 04:58:45.414908 kubelet[2827]: E1104 04:58:45.414700 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lwhrn" podUID="f971ab18-a5fd-481a-b739-b1338118165c" Nov 4 04:58:45.415695 kubelet[2827]: E1104 04:58:45.415539 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" podUID="7b46c654-6c31-424f-ab6b-6ce8350f8d0d" Nov 4 04:58:45.416730 kubelet[2827]: E1104 04:58:45.416652 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-5879fbbc68-54k5g" podUID="14be4876-4542-4022-8773-f8e166b995c8" Nov 4 04:58:46.837651 sshd[5222]: Connection closed by 10.0.0.1 port 51066 Nov 4 04:58:46.837177 sshd-session[5219]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:46.851336 systemd[1]: sshd@18-10.0.0.39:22-10.0.0.1:51066.service: Deactivated successfully. Nov 4 04:58:46.853658 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 04:58:46.854482 systemd-logind[1608]: Session 19 logged out. Waiting for processes to exit. Nov 4 04:58:46.858011 systemd[1]: Started sshd@19-10.0.0.39:22-10.0.0.1:51082.service - OpenSSH per-connection server daemon (10.0.0.1:51082). Nov 4 04:58:46.859108 systemd-logind[1608]: Removed session 19. Nov 4 04:58:46.919408 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 51082 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:46.921289 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:46.926351 systemd-logind[1608]: New session 20 of user core. Nov 4 04:58:46.934274 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 4 04:58:47.423873 kubelet[2827]: E1104 04:58:47.423806 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:48.415594 kubelet[2827]: E1104 04:58:48.415524 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" podUID="5973885e-fb9f-4950-a4af-55889b504742" Nov 4 04:58:48.697039 sshd[5236]: Connection closed by 10.0.0.1 port 51082 Nov 4 04:58:48.697376 sshd-session[5233]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:48.711655 systemd[1]: sshd@19-10.0.0.39:22-10.0.0.1:51082.service: Deactivated successfully. Nov 4 04:58:48.714232 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 04:58:48.716245 systemd-logind[1608]: Session 20 logged out. Waiting for processes to exit. Nov 4 04:58:48.723521 systemd[1]: Started sshd@20-10.0.0.39:22-10.0.0.1:51088.service - OpenSSH per-connection server daemon (10.0.0.1:51088). Nov 4 04:58:48.726245 systemd-logind[1608]: Removed session 20. Nov 4 04:58:48.827224 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 51088 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:48.829626 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:48.837411 systemd-logind[1608]: New session 21 of user core. Nov 4 04:58:48.850423 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 4 04:58:49.337493 sshd[5284]: Connection closed by 10.0.0.1 port 51088 Nov 4 04:58:49.337830 sshd-session[5281]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:49.353195 systemd[1]: sshd@20-10.0.0.39:22-10.0.0.1:51088.service: Deactivated successfully. Nov 4 04:58:49.355763 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 04:58:49.357882 systemd-logind[1608]: Session 21 logged out. Waiting for processes to exit. Nov 4 04:58:49.361228 systemd[1]: Started sshd@21-10.0.0.39:22-10.0.0.1:51092.service - OpenSSH per-connection server daemon (10.0.0.1:51092). Nov 4 04:58:49.362707 systemd-logind[1608]: Removed session 21. Nov 4 04:58:49.418673 sshd[5296]: Accepted publickey for core from 10.0.0.1 port 51092 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:49.420902 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:49.425993 systemd-logind[1608]: New session 22 of user core. Nov 4 04:58:49.436303 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 04:58:49.530212 sshd[5299]: Connection closed by 10.0.0.1 port 51092 Nov 4 04:58:49.529840 sshd-session[5296]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:49.537420 systemd[1]: sshd@21-10.0.0.39:22-10.0.0.1:51092.service: Deactivated successfully. Nov 4 04:58:49.540374 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 04:58:49.541504 systemd-logind[1608]: Session 22 logged out. Waiting for processes to exit. Nov 4 04:58:49.543427 systemd-logind[1608]: Removed session 22. 
Nov 4 04:58:50.434764 update_engine[1611]: I20251104 04:58:50.434664 1611 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 4 04:58:50.434764 update_engine[1611]: I20251104 04:58:50.434746 1611 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 4 04:58:50.436286 update_engine[1611]: I20251104 04:58:50.436252 1611 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 4 04:58:50.436944 update_engine[1611]: I20251104 04:58:50.436913 1611 omaha_request_params.cc:62] Current group set to developer Nov 4 04:58:50.437171 update_engine[1611]: I20251104 04:58:50.437147 1611 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 4 04:58:50.437171 update_engine[1611]: I20251104 04:58:50.437162 1611 update_attempter.cc:643] Scheduling an action processor start. Nov 4 04:58:50.437222 update_engine[1611]: I20251104 04:58:50.437193 1611 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 4 04:58:50.437290 update_engine[1611]: I20251104 04:58:50.437274 1611 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 4 04:58:50.437375 update_engine[1611]: I20251104 04:58:50.437358 1611 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 4 04:58:50.437399 update_engine[1611]: I20251104 04:58:50.437371 1611 omaha_request_action.cc:272] Request: Nov 4 04:58:50.437399 update_engine[1611]: Nov 4 04:58:50.437399 update_engine[1611]: Nov 4 04:58:50.437399 update_engine[1611]: Nov 4 04:58:50.437399 update_engine[1611]: Nov 4 04:58:50.437399 update_engine[1611]: Nov 4 04:58:50.437399 update_engine[1611]: Nov 4 04:58:50.437399 update_engine[1611]: Nov 4 04:58:50.437399 update_engine[1611]: Nov 4 04:58:50.437399 update_engine[1611]: I20251104 04:58:50.437381 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 04:58:50.445698 update_engine[1611]: I20251104 04:58:50.444995 1611 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 04:58:50.445698 update_engine[1611]: I20251104 04:58:50.445602 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 4 04:58:50.454586 locksmithd[1647]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 4 04:58:50.456916 update_engine[1611]: E20251104 04:58:50.456867 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 04:58:50.457011 update_engine[1611]: I20251104 04:58:50.456987 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 4 04:58:53.416384 kubelet[2827]: E1104 04:58:53.416183 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" podUID="12098cb7-6382-4a20-b151-e09bfda5e484" Nov 4 04:58:53.417682 kubelet[2827]: E1104 04:58:53.417274 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:58:54.547245 systemd[1]: Started sshd@22-10.0.0.39:22-10.0.0.1:48238.service - OpenSSH per-connection server daemon (10.0.0.1:48238). Nov 4 04:58:54.603999 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 48238 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:54.605555 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:54.609991 systemd-logind[1608]: New session 23 of user core. Nov 4 04:58:54.618248 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 4 04:58:54.688424 sshd[5317]: Connection closed by 10.0.0.1 port 48238 Nov 4 04:58:54.688754 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:54.694252 systemd[1]: sshd@22-10.0.0.39:22-10.0.0.1:48238.service: Deactivated successfully. Nov 4 04:58:54.696382 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 04:58:54.697282 systemd-logind[1608]: Session 23 logged out. Waiting for processes to exit. Nov 4 04:58:54.698678 systemd-logind[1608]: Removed session 23. 
Nov 4 04:58:56.414455 kubelet[2827]: E1104 04:58:56.414385 2827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 04:58:56.415860 kubelet[2827]: E1104 04:58:56.415773 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" podUID="7b46c654-6c31-424f-ab6b-6ce8350f8d0d" Nov 4 04:58:57.419136 kubelet[2827]: E1104 04:58:57.417036 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5879fbbc68-54k5g" podUID="14be4876-4542-4022-8773-f8e166b995c8" Nov 4 04:58:59.704200 systemd[1]: Started sshd@23-10.0.0.39:22-10.0.0.1:48244.service - OpenSSH per-connection server daemon (10.0.0.1:48244). 
Nov 4 04:58:59.766949 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 48244 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:58:59.768946 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:58:59.773851 systemd-logind[1608]: New session 24 of user core. Nov 4 04:58:59.787332 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 4 04:58:59.870705 sshd[5336]: Connection closed by 10.0.0.1 port 48244 Nov 4 04:58:59.871071 sshd-session[5333]: pam_unix(sshd:session): session closed for user core Nov 4 04:58:59.876587 systemd[1]: sshd@23-10.0.0.39:22-10.0.0.1:48244.service: Deactivated successfully. Nov 4 04:58:59.879174 systemd[1]: session-24.scope: Deactivated successfully. Nov 4 04:58:59.880014 systemd-logind[1608]: Session 24 logged out. Waiting for processes to exit. Nov 4 04:58:59.881648 systemd-logind[1608]: Removed session 24. Nov 4 04:59:00.389850 update_engine[1611]: I20251104 04:59:00.389725 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 04:59:00.390626 update_engine[1611]: I20251104 04:59:00.389878 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 04:59:00.390626 update_engine[1611]: I20251104 04:59:00.390441 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 4 04:59:00.414951 update_engine[1611]: E20251104 04:59:00.414855 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 04:59:00.415357 update_engine[1611]: I20251104 04:59:00.415198 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 4 04:59:00.415394 kubelet[2827]: E1104 04:59:00.415005 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lwhrn" podUID="f971ab18-a5fd-481a-b739-b1338118165c" Nov 4 04:59:01.414473 kubelet[2827]: E1104 04:59:01.414256 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-vxx5m" podUID="5973885e-fb9f-4950-a4af-55889b504742" Nov 4 04:59:04.885244 systemd[1]: Started sshd@24-10.0.0.39:22-10.0.0.1:48744.service - OpenSSH per-connection server daemon (10.0.0.1:48744). Nov 4 04:59:04.960126 sshd[5351]: Accepted publickey for core from 10.0.0.1 port 48744 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:59:04.961839 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:04.966636 systemd-logind[1608]: New session 25 of user core. 
Nov 4 04:59:04.972250 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 4 04:59:05.065020 sshd[5354]: Connection closed by 10.0.0.1 port 48744 Nov 4 04:59:05.065927 sshd-session[5351]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:05.070972 systemd[1]: sshd@24-10.0.0.39:22-10.0.0.1:48744.service: Deactivated successfully. Nov 4 04:59:05.073384 systemd[1]: session-25.scope: Deactivated successfully. Nov 4 04:59:05.074428 systemd-logind[1608]: Session 25 logged out. Waiting for processes to exit. Nov 4 04:59:05.076217 systemd-logind[1608]: Removed session 25. Nov 4 04:59:07.417136 kubelet[2827]: E1104 04:59:07.416603 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t4jsk" podUID="81cc34c3-6e55-409f-a691-f7248edc74db" Nov 4 04:59:08.415141 containerd[1635]: time="2025-11-04T04:59:08.414401069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:59:08.759820 containerd[1635]: time="2025-11-04T04:59:08.759732102Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:59:08.761222 containerd[1635]: time="2025-11-04T04:59:08.761155148Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:59:08.761317 containerd[1635]: time="2025-11-04T04:59:08.761194461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:59:08.761519 kubelet[2827]: E1104 04:59:08.761472 2827 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:08.761938 kubelet[2827]: E1104 04:59:08.761535 2827 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:59:08.761938 kubelet[2827]: E1104 04:59:08.761689 2827 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56bb65b864-t4njh_calico-system(12098cb7-6382-4a20-b151-e09bfda5e484): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:59:08.763217 kubelet[2827]: E1104 04:59:08.763158 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56bb65b864-t4njh" podUID="12098cb7-6382-4a20-b151-e09bfda5e484" Nov 4 04:59:09.414309 kubelet[2827]: E1104 04:59:09.414241 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bfb468d79-f8pbq" podUID="7b46c654-6c31-424f-ab6b-6ce8350f8d0d" Nov 4 04:59:10.078417 systemd[1]: Started sshd@25-10.0.0.39:22-10.0.0.1:48756.service - OpenSSH per-connection server daemon (10.0.0.1:48756). Nov 4 04:59:10.137801 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 48756 ssh2: RSA SHA256:NM2V9WwNqhgSt7OK5g0xyz1nQq0UunF4qtyYl3w74Uw Nov 4 04:59:10.139626 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:59:10.145367 systemd-logind[1608]: New session 26 of user core. Nov 4 04:59:10.163362 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 4 04:59:10.267375 sshd[5380]: Connection closed by 10.0.0.1 port 48756 Nov 4 04:59:10.267735 sshd-session[5377]: pam_unix(sshd:session): session closed for user core Nov 4 04:59:10.274340 systemd[1]: sshd@25-10.0.0.39:22-10.0.0.1:48756.service: Deactivated successfully. Nov 4 04:59:10.277289 systemd[1]: session-26.scope: Deactivated successfully. Nov 4 04:59:10.278316 systemd-logind[1608]: Session 26 logged out. Waiting for processes to exit. Nov 4 04:59:10.279928 systemd-logind[1608]: Removed session 26. Nov 4 04:59:10.389537 update_engine[1611]: I20251104 04:59:10.389267 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 04:59:10.389537 update_engine[1611]: I20251104 04:59:10.389416 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 04:59:10.390024 update_engine[1611]: I20251104 04:59:10.389878 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 4 04:59:10.398561 update_engine[1611]: E20251104 04:59:10.398509 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 04:59:10.398637 update_engine[1611]: I20251104 04:59:10.398589 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 4 04:59:11.416852 kubelet[2827]: E1104 04:59:11.416679 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5879fbbc68-54k5g" podUID="14be4876-4542-4022-8773-f8e166b995c8"