Nov 5 15:48:48.330414 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025
Nov 5 15:48:48.330441 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:48:48.330450 kernel: BIOS-provided physical RAM map:
Nov 5 15:48:48.330457 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Nov 5 15:48:48.330464 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Nov 5 15:48:48.330473 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Nov 5 15:48:48.330481 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Nov 5 15:48:48.330488 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Nov 5 15:48:48.330494 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Nov 5 15:48:48.330501 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Nov 5 15:48:48.330509 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Nov 5 15:48:48.330516 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Nov 5 15:48:48.330523 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Nov 5 15:48:48.330531 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Nov 5 15:48:48.330540 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Nov 5 15:48:48.330547 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Nov 5 15:48:48.330561 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 5 15:48:48.330574 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 15:48:48.330584 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 15:48:48.330594 kernel: NX (Execute Disable) protection: active
Nov 5 15:48:48.330604 kernel: APIC: Static calls initialized
Nov 5 15:48:48.330614 kernel: e820: update [mem 0x9a13d018-0x9a146c57] usable ==> usable
Nov 5 15:48:48.330624 kernel: e820: update [mem 0x9a100018-0x9a13ce57] usable ==> usable
Nov 5 15:48:48.330633 kernel: extended physical RAM map:
Nov 5 15:48:48.330643 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Nov 5 15:48:48.330652 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Nov 5 15:48:48.330662 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Nov 5 15:48:48.330672 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Nov 5 15:48:48.330686 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a100017] usable
Nov 5 15:48:48.330696 kernel: reserve setup_data: [mem 0x000000009a100018-0x000000009a13ce57] usable
Nov 5 15:48:48.330706 kernel: reserve setup_data: [mem 0x000000009a13ce58-0x000000009a13d017] usable
Nov 5 15:48:48.330716 kernel: reserve setup_data: [mem 0x000000009a13d018-0x000000009a146c57] usable
Nov 5 15:48:48.330726 kernel: reserve setup_data: [mem 0x000000009a146c58-0x000000009b8ecfff] usable
Nov 5 15:48:48.330736 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Nov 5 15:48:48.330746 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Nov 5 15:48:48.330756 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Nov 5 15:48:48.330766 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Nov 5 15:48:48.330776 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Nov 5 15:48:48.330788 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Nov 5 15:48:48.330799 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Nov 5 15:48:48.330813 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Nov 5 15:48:48.330824 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 5 15:48:48.330834 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 15:48:48.330860 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 15:48:48.330870 kernel: efi: EFI v2.7 by EDK II
Nov 5 15:48:48.330881 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Nov 5 15:48:48.330891 kernel: random: crng init done
Nov 5 15:48:48.330901 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Nov 5 15:48:48.330911 kernel: secureboot: Secure boot enabled
Nov 5 15:48:48.330922 kernel: SMBIOS 2.8 present.
Nov 5 15:48:48.330932 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 5 15:48:48.330942 kernel: DMI: Memory slots populated: 1/1
Nov 5 15:48:48.330956 kernel: Hypervisor detected: KVM
Nov 5 15:48:48.330966 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Nov 5 15:48:48.330976 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 15:48:48.331020 kernel: kvm-clock: using sched offset of 6735168424 cycles
Nov 5 15:48:48.331038 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 15:48:48.331100 kernel: tsc: Detected 2794.748 MHz processor
Nov 5 15:48:48.331115 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 15:48:48.331126 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 15:48:48.331137 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Nov 5 15:48:48.331158 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 5 15:48:48.331169 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 15:48:48.331182 kernel: Using GB pages for direct mapping
Nov 5 15:48:48.331193 kernel: ACPI: Early table checksum verification disabled
Nov 5 15:48:48.331204 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Nov 5 15:48:48.331215 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 5 15:48:48.331226 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:48:48.331240 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:48:48.331251 kernel: ACPI: FACS 0x000000009BBDD000 000040
Nov 5 15:48:48.331262 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:48:48.331273 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:48:48.331284 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:48:48.331296 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:48:48.331307 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 5 15:48:48.331322 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Nov 5 15:48:48.331333 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Nov 5 15:48:48.331345 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Nov 5 15:48:48.331356 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Nov 5 15:48:48.331367 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Nov 5 15:48:48.331378 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Nov 5 15:48:48.331389 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Nov 5 15:48:48.331404 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Nov 5 15:48:48.331414 kernel: No NUMA configuration found
Nov 5 15:48:48.331426 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Nov 5 15:48:48.331437 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Nov 5 15:48:48.331448 kernel: Zone ranges:
Nov 5 15:48:48.331459 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 15:48:48.331469 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Nov 5 15:48:48.331480 kernel: Normal empty
Nov 5 15:48:48.331495 kernel: Device empty
Nov 5 15:48:48.331505 kernel: Movable zone start for each node
Nov 5 15:48:48.331516 kernel: Early memory node ranges
Nov 5 15:48:48.331527 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Nov 5 15:48:48.331537 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Nov 5 15:48:48.331547 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Nov 5 15:48:48.331558 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Nov 5 15:48:48.331567 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Nov 5 15:48:48.331578 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Nov 5 15:48:48.331586 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 15:48:48.331594 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Nov 5 15:48:48.331602 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 5 15:48:48.331610 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 5 15:48:48.331618 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 5 15:48:48.331626 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Nov 5 15:48:48.331636 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 15:48:48.331644 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 15:48:48.331652 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 15:48:48.331660 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 15:48:48.331673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 15:48:48.331681 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 15:48:48.331689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 15:48:48.331700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 15:48:48.331708 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 15:48:48.331716 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 15:48:48.331724 kernel: TSC deadline timer available
Nov 5 15:48:48.331732 kernel: CPU topo: Max. logical packages: 1
Nov 5 15:48:48.331740 kernel: CPU topo: Max. logical dies: 1
Nov 5 15:48:48.331757 kernel: CPU topo: Max. dies per package: 1
Nov 5 15:48:48.331765 kernel: CPU topo: Max. threads per core: 1
Nov 5 15:48:48.331773 kernel: CPU topo: Num. cores per package: 4
Nov 5 15:48:48.331781 kernel: CPU topo: Num. threads per package: 4
Nov 5 15:48:48.331791 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 5 15:48:48.331800 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 15:48:48.331808 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 5 15:48:48.331816 kernel: kvm-guest: setup PV sched yield
Nov 5 15:48:48.331827 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 5 15:48:48.331835 kernel: Booting paravirtualized kernel on KVM
Nov 5 15:48:48.331856 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 15:48:48.331866 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 5 15:48:48.331875 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 5 15:48:48.331883 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 5 15:48:48.331891 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 5 15:48:48.331902 kernel: kvm-guest: PV spinlocks enabled
Nov 5 15:48:48.331910 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 5 15:48:48.331920 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:48:48.331929 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 15:48:48.331937 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 15:48:48.331946 kernel: Fallback order for Node 0: 0
Nov 5 15:48:48.331954 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Nov 5 15:48:48.331964 kernel: Policy zone: DMA32
Nov 5 15:48:48.331973 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 15:48:48.331981 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 5 15:48:48.331989 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 15:48:48.331998 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 15:48:48.332006 kernel: Dynamic Preempt: voluntary
Nov 5 15:48:48.332014 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 15:48:48.332026 kernel: rcu: RCU event tracing is enabled.
Nov 5 15:48:48.332043 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 5 15:48:48.332080 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 15:48:48.332093 kernel: Rude variant of Tasks RCU enabled.
Nov 5 15:48:48.332104 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 15:48:48.332116 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 15:48:48.332126 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 5 15:48:48.332138 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 15:48:48.332154 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 15:48:48.332171 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 15:48:48.332183 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 5 15:48:48.332195 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 15:48:48.332207 kernel: Console: colour dummy device 80x25
Nov 5 15:48:48.332219 kernel: printk: legacy console [ttyS0] enabled
Nov 5 15:48:48.332231 kernel: ACPI: Core revision 20240827
Nov 5 15:48:48.332247 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 15:48:48.332259 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 15:48:48.332271 kernel: x2apic enabled
Nov 5 15:48:48.332283 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 15:48:48.332295 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 5 15:48:48.332307 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 5 15:48:48.332319 kernel: kvm-guest: setup PV IPIs
Nov 5 15:48:48.332334 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 15:48:48.332346 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 5 15:48:48.332358 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 5 15:48:48.332370 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 5 15:48:48.332382 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 5 15:48:48.332394 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 5 15:48:48.332406 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 15:48:48.332421 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 15:48:48.332433 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 15:48:48.332446 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 5 15:48:48.332458 kernel: active return thunk: retbleed_return_thunk
Nov 5 15:48:48.332470 kernel: RETBleed: Mitigation: untrained return thunk
Nov 5 15:48:48.332481 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 15:48:48.332493 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 15:48:48.332507 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 5 15:48:48.332519 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 5 15:48:48.332529 kernel: active return thunk: srso_return_thunk
Nov 5 15:48:48.332540 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 5 15:48:48.332552 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 15:48:48.332563 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 15:48:48.332574 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 15:48:48.332590 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 15:48:48.332602 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 5 15:48:48.332615 kernel: Freeing SMP alternatives memory: 32K
Nov 5 15:48:48.332627 kernel: pid_max: default: 32768 minimum: 301
Nov 5 15:48:48.332639 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 15:48:48.332651 kernel: landlock: Up and running.
Nov 5 15:48:48.332663 kernel: SELinux: Initializing.
Nov 5 15:48:48.332678 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 15:48:48.332690 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 15:48:48.332703 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 5 15:48:48.332715 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 5 15:48:48.332726 kernel: ... version: 0
Nov 5 15:48:48.332742 kernel: ... bit width: 48
Nov 5 15:48:48.332770 kernel: ... generic registers: 6
Nov 5 15:48:48.332782 kernel: ... value mask: 0000ffffffffffff
Nov 5 15:48:48.332832 kernel: ... max period: 00007fffffffffff
Nov 5 15:48:48.332857 kernel: ... fixed-purpose events: 0
Nov 5 15:48:48.332870 kernel: ... event mask: 000000000000003f
Nov 5 15:48:48.332923 kernel: signal: max sigframe size: 1776
Nov 5 15:48:48.332935 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 15:48:48.332951 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 15:48:48.332962 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 15:48:48.332972 kernel: smp: Bringing up secondary CPUs ...
Nov 5 15:48:48.332983 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 15:48:48.332994 kernel: .... node #0, CPUs: #1 #2 #3
Nov 5 15:48:48.333006 kernel: smp: Brought up 1 node, 4 CPUs
Nov 5 15:48:48.333018 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 5 15:48:48.333032 kernel: Memory: 2431744K/2552216K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114536K reserved, 0K cma-reserved)
Nov 5 15:48:48.333044 kernel: devtmpfs: initialized
Nov 5 15:48:48.333078 kernel: x86/mm: Memory block size: 128MB
Nov 5 15:48:48.333091 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Nov 5 15:48:48.333103 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Nov 5 15:48:48.333115 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 15:48:48.333127 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 5 15:48:48.333143 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 15:48:48.333155 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 15:48:48.333167 kernel: audit: initializing netlink subsys (disabled)
Nov 5 15:48:48.333179 kernel: audit: type=2000 audit(1762357726.603:1): state=initialized audit_enabled=0 res=1
Nov 5 15:48:48.333192 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 15:48:48.333204 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 15:48:48.333216 kernel: cpuidle: using governor menu
Nov 5 15:48:48.333230 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 15:48:48.333242 kernel: dca service started, version 1.12.1
Nov 5 15:48:48.333255 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 5 15:48:48.333267 kernel: PCI: Using configuration type 1 for base access
Nov 5 15:48:48.333279 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 15:48:48.333291 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 15:48:48.333304 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 15:48:48.333318 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 15:48:48.333330 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 15:48:48.333342 kernel: ACPI: Added _OSI(Module Device)
Nov 5 15:48:48.333354 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 15:48:48.333366 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 15:48:48.333378 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 15:48:48.333389 kernel: ACPI: Interpreter enabled
Nov 5 15:48:48.333401 kernel: ACPI: PM: (supports S0 S5)
Nov 5 15:48:48.333416 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 15:48:48.333428 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 15:48:48.333440 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 15:48:48.333452 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 5 15:48:48.333743 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 15:48:48.333981 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 15:48:48.334227 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 5 15:48:48.334245 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 5 15:48:48.334462 kernel: PCI host bridge to bus 0000:00
Nov 5 15:48:48.334663 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 15:48:48.334888 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 15:48:48.335138 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 15:48:48.335339 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Nov 5 15:48:48.335555 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Nov 5 15:48:48.335763 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Nov 5 15:48:48.335999 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 15:48:48.336234 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 5 15:48:48.336436 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 5 15:48:48.336633 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Nov 5 15:48:48.336823 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Nov 5 15:48:48.337063 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 5 15:48:48.337275 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 15:48:48.337475 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 5 15:48:48.337744 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Nov 5 15:48:48.337990 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Nov 5 15:48:48.338256 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Nov 5 15:48:48.338478 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 5 15:48:48.338695 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Nov 5 15:48:48.338942 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Nov 5 15:48:48.339249 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Nov 5 15:48:48.339455 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 15:48:48.339688 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Nov 5 15:48:48.339943 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Nov 5 15:48:48.340220 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Nov 5 15:48:48.340455 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Nov 5 15:48:48.340658 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 5 15:48:48.340920 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 5 15:48:48.341180 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 5 15:48:48.341396 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Nov 5 15:48:48.341618 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Nov 5 15:48:48.341823 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 5 15:48:48.341854 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Nov 5 15:48:48.341867 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 15:48:48.341879 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 15:48:48.341891 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 15:48:48.341904 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 15:48:48.341921 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 5 15:48:48.341933 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 5 15:48:48.341945 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 5 15:48:48.341958 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 5 15:48:48.341970 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 5 15:48:48.341983 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 5 15:48:48.341995 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 5 15:48:48.342012 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 5 15:48:48.342025 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 5 15:48:48.342038 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 5 15:48:48.342075 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 5 15:48:48.342088 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 5 15:48:48.342100 kernel: iommu: Default domain type: Translated
Nov 5 15:48:48.342112 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 15:48:48.342135 kernel: efivars: Registered efivars operations
Nov 5 15:48:48.342147 kernel: PCI: Using ACPI for IRQ routing
Nov 5 15:48:48.342158 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 15:48:48.342169 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Nov 5 15:48:48.342180 kernel: e820: reserve RAM buffer [mem 0x9a100018-0x9bffffff]
Nov 5 15:48:48.342191 kernel: e820: reserve RAM buffer [mem 0x9a13d018-0x9bffffff]
Nov 5 15:48:48.342203 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Nov 5 15:48:48.342418 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Nov 5 15:48:48.342632 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 5 15:48:48.342833 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 5 15:48:48.342863 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 15:48:48.342875 kernel: vgaarb: loaded
Nov 5 15:48:48.342886 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 15:48:48.342897 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 15:48:48.342913 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 15:48:48.342925 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 15:48:48.342936 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 15:48:48.343197 kernel: pnp: PnP ACPI init
Nov 5 15:48:48.343217 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Nov 5 15:48:48.343229 kernel: pnp: PnP ACPI: found 6 devices
Nov 5 15:48:48.343246 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 15:48:48.343258 kernel: NET: Registered PF_INET protocol family
Nov 5 15:48:48.343270 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 15:48:48.343281 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 15:48:48.343294 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 15:48:48.343306 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 15:48:48.343318 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 15:48:48.343333 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 15:48:48.343345 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 15:48:48.343357 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 15:48:48.343368 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 15:48:48.343593 kernel: NET: Registered PF_XDP protocol family
Nov 5 15:48:48.343777 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 5 15:48:48.343957 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 5 15:48:48.344180 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 15:48:48.344353 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 15:48:48.344513 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 15:48:48.344672 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Nov 5 15:48:48.344831 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Nov 5 15:48:48.344855 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Nov 5 15:48:48.344865 kernel: PCI: CLS 0 bytes, default 64
Nov 5 15:48:48.344879 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 5 15:48:48.344889 kernel: Initialise system trusted keyrings
Nov 5 15:48:48.344898 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 15:48:48.344907 kernel: Key type asymmetric registered
Nov 5 15:48:48.344933 kernel: Asymmetric key parser 'x509' registered
Nov 5 15:48:48.344945 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 15:48:48.344954 kernel: io scheduler mq-deadline registered
Nov 5 15:48:48.344966 kernel: io scheduler kyber registered
Nov 5 15:48:48.344975 kernel: io scheduler bfq registered
Nov 5 15:48:48.344985 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 15:48:48.344995 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 5 15:48:48.345005 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 5 15:48:48.345014 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 5 15:48:48.345025 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 15:48:48.345061 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 15:48:48.345082 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 15:48:48.345097 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 15:48:48.345314 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 15:48:48.345514 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 5 15:48:48.345533 kernel: rtc_cmos 00:04: registered as rtc0
Nov 5 15:48:48.345738 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 15:48:48.345954 kernel: rtc_cmos 00:04: setting system clock to 2025-11-05T15:48:46 UTC (1762357726)
Nov 5 15:48:48.345975 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 5 15:48:48.345987 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 5 15:48:48.345999 kernel: efifb: probing for efifb
Nov 5 15:48:48.346011 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Nov 5 15:48:48.346023 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Nov 5 15:48:48.346037 kernel: efifb: scrolling: redraw
Nov 5 15:48:48.346067 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 5 15:48:48.346082 kernel: Console: switching to colour frame buffer device 160x50
Nov 5 15:48:48.346094 kernel: fb0: EFI VGA frame buffer device
Nov 5 15:48:48.346106 kernel: pstore: Using crash dump compression: deflate
Nov 5 15:48:48.346120 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 5 15:48:48.346132 kernel: NET: Registered PF_INET6 protocol family
Nov 5 15:48:48.346144 kernel: Segment Routing with IPv6
Nov 5 15:48:48.346155 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 15:48:48.346167 kernel: NET: Registered PF_PACKET protocol family
Nov 5 15:48:48.346179 kernel: Key type dns_resolver registered
Nov 5 15:48:48.346190 kernel: IPI shorthand broadcast: enabled
Nov 5 15:48:48.346205 kernel: sched_clock: Marking stable (1291002960, 271989319)->(1618004326, -55012047)
Nov 5 15:48:48.346216 kernel: registered taskstats version 1
Nov 5 15:48:48.346228 kernel: Loading compiled-in X.509 certificates
Nov 5 15:48:48.346239 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382'
Nov 5 15:48:48.346251 kernel: Demotion targets for Node 0: null
Nov 5 15:48:48.346262 kernel: Key type .fscrypt registered
Nov 5 15:48:48.346273 kernel: Key type fscrypt-provisioning registered
Nov 5 15:48:48.346288 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 15:48:48.346300 kernel: ima: Allocated hash algorithm: sha1
Nov 5 15:48:48.346311 kernel: ima: No architecture policies found
Nov 5 15:48:48.346324 kernel: clk: Disabling unused clocks
Nov 5 15:48:48.346335 kernel: Freeing unused kernel image (initmem) memory: 15964K
Nov 5 15:48:48.346346 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 15:48:48.346358 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 15:48:48.346373 kernel: Run /init as init process
Nov 5 15:48:48.346385 kernel: with arguments:
Nov 5 15:48:48.346396 kernel: /init
Nov 5 15:48:48.346408 kernel: with environment:
Nov 5 15:48:48.346420 kernel: HOME=/
Nov 5 15:48:48.346431 kernel: TERM=linux
Nov 5 15:48:48.346443 kernel: SCSI subsystem initialized
Nov 5 15:48:48.346647 kernel: libata version 3.00 loaded.
Nov 5 15:48:48.346661 kernel: ahci 0000:00:1f.2: version 3.0
Nov 5 15:48:48.346856 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 5 15:48:48.347034 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 5 15:48:48.347247 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 5 15:48:48.347446 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 5 15:48:48.347665 kernel: scsi host0: ahci
Nov 5 15:48:48.347926 kernel: scsi host1: ahci
Nov 5 15:48:48.348249 kernel: scsi host2: ahci
Nov 5 15:48:48.348487 kernel: scsi host3: ahci
Nov 5 15:48:48.348726 kernel: scsi host4: ahci
Nov 5 15:48:48.348752 kernel: scsi host5: ahci
Nov 5 15:48:48.348765 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Nov 5 15:48:48.348778 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Nov 5 15:48:48.348791 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Nov 5 15:48:48.348803 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Nov 5 15:48:48.348815 kernel: ata5: SATA max UDMA/133 abar
m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Nov 5 15:48:48.348815 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Nov 5 15:48:48.348828 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 5 15:48:48.348856 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 5 15:48:48.348869 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 5 15:48:48.348880 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 5 15:48:48.348893 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 5 15:48:48.348905 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 5 15:48:48.348917 kernel: ata3.00: LPM support broken, forcing max_power Nov 5 15:48:48.348930 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 5 15:48:48.348946 kernel: ata3.00: applying bridge limits Nov 5 15:48:48.348959 kernel: ata3.00: LPM support broken, forcing max_power Nov 5 15:48:48.348971 kernel: ata3.00: configured for UDMA/100 Nov 5 15:48:48.349267 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 5 15:48:48.349526 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 5 15:48:48.349757 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 5 15:48:48.349780 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 5 15:48:48.349792 kernel: GPT:16515071 != 27000831 Nov 5 15:48:48.349804 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 5 15:48:48.349816 kernel: GPT:16515071 != 27000831 Nov 5 15:48:48.349827 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 5 15:48:48.349850 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 5 15:48:48.350134 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 5 15:48:48.350161 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 5 15:48:48.350425 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 5 15:48:48.350444 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 5 15:48:48.350453 kernel: device-mapper: uevent: version 1.0.3 Nov 5 15:48:48.350463 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 15:48:48.350472 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 5 15:48:48.350481 kernel: raid6: avx2x4 gen() 21991 MB/s Nov 5 15:48:48.350495 kernel: raid6: avx2x2 gen() 18133 MB/s Nov 5 15:48:48.350505 kernel: raid6: avx2x1 gen() 15457 MB/s Nov 5 15:48:48.350514 kernel: raid6: using algorithm avx2x4 gen() 21991 MB/s Nov 5 15:48:48.350524 kernel: raid6: .... 
xor() 6399 MB/s, rmw enabled Nov 5 15:48:48.350533 kernel: raid6: using avx2x2 recovery algorithm Nov 5 15:48:48.350543 kernel: xor: automatically using best checksumming function avx Nov 5 15:48:48.350553 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 15:48:48.350565 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182) Nov 5 15:48:48.350574 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01 Nov 5 15:48:48.350584 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:48:48.350594 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 15:48:48.350604 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 15:48:48.350613 kernel: loop: module loaded Nov 5 15:48:48.350622 kernel: loop0: detected capacity change from 0 to 100120 Nov 5 15:48:48.350634 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 15:48:48.350643 systemd[1]: Successfully made /usr/ read-only. Nov 5 15:48:48.350656 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:48:48.350666 systemd[1]: Detected virtualization kvm. Nov 5 15:48:48.350675 systemd[1]: Detected architecture x86-64. Nov 5 15:48:48.350684 systemd[1]: Running in initrd. Nov 5 15:48:48.350696 systemd[1]: No hostname configured, using default hostname. Nov 5 15:48:48.350705 systemd[1]: Hostname set to . Nov 5 15:48:48.350714 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 15:48:48.350724 systemd[1]: Queued start job for default target initrd.target. 
Nov 5 15:48:48.350734 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:48:48.350743 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:48:48.350755 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:48:48.350765 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 15:48:48.350775 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:48:48.350785 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 15:48:48.350795 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 15:48:48.350805 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:48:48.350817 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:48:48.350827 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:48:48.350836 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:48:48.350859 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:48:48.350869 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:48:48.350879 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:48:48.350888 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:48:48.350901 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:48:48.350910 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 15:48:48.350919 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 5 15:48:48.350929 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 5 15:48:48.350938 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:48:48.350947 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:48:48.350957 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:48:48.350968 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 15:48:48.350978 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 15:48:48.350988 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:48:48.350997 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 15:48:48.351007 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 15:48:48.351017 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 15:48:48.351031 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:48:48.351044 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:48:48.351096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:48:48.351106 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 15:48:48.351119 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:48:48.351157 systemd-journald[315]: Collecting audit messages is disabled. Nov 5 15:48:48.351179 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 15:48:48.351192 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:48:48.351202 systemd-journald[315]: Journal started Nov 5 15:48:48.351221 systemd-journald[315]: Runtime Journal (/run/log/journal/9a34f5b20e734020a7c6a5f41212239c) is 5.9M, max 47.9M, 41.9M free. 
Nov 5 15:48:48.354100 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:48:48.357307 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:48:48.375073 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 5 15:48:48.378345 systemd-modules-load[318]: Inserted module 'br_netfilter' Nov 5 15:48:48.379644 kernel: Bridge firewalling registered Nov 5 15:48:48.380940 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:48:48.386767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:48:48.391125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:48:48.393006 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 15:48:48.396924 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 15:48:48.402786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:48:48.405173 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:48:48.416429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:48:48.430265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:48:48.435090 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:48:48.437651 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:48:48.454350 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:48:48.461856 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 5 15:48:48.496261 dracut-cmdline[362]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:48:48.533786 systemd-resolved[352]: Positive Trust Anchors: Nov 5 15:48:48.533806 systemd-resolved[352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:48:48.533811 systemd-resolved[352]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:48:48.533867 systemd-resolved[352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:48:48.571775 systemd-resolved[352]: Defaulting to hostname 'linux'. Nov 5 15:48:48.573245 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:48:48.574148 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:48:48.632101 kernel: Loading iSCSI transport class v2.0-870. 
Nov 5 15:48:48.648110 kernel: iscsi: registered transport (tcp) Nov 5 15:48:48.673630 kernel: iscsi: registered transport (qla4xxx) Nov 5 15:48:48.673722 kernel: QLogic iSCSI HBA Driver Nov 5 15:48:48.702444 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:48:48.730581 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:48:48.737163 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:48:48.795658 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 15:48:48.798544 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 15:48:48.800956 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 15:48:48.853380 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:48:48.857794 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:48:48.893840 systemd-udevd[603]: Using default interface naming scheme 'v257'. Nov 5 15:48:48.907454 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:48:48.913195 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 15:48:48.931786 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:48:48.936739 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:48:48.948368 dracut-pre-trigger[683]: rd.md=0: removing MD RAID activation Nov 5 15:48:48.978769 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:48:48.983736 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 5 15:48:49.001223 systemd-networkd[701]: lo: Link UP Nov 5 15:48:49.001233 systemd-networkd[701]: lo: Gained carrier Nov 5 15:48:49.004091 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:48:49.008377 systemd[1]: Reached target network.target - Network. Nov 5 15:48:49.103634 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:48:49.107734 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 15:48:49.162162 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 5 15:48:49.195232 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 5 15:48:49.219097 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 15:48:49.224994 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 5 15:48:49.244195 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 5 15:48:49.250353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 15:48:49.257081 kernel: AES CTR mode by8 optimization enabled Nov 5 15:48:49.273614 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 15:48:49.281782 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:48:49.282042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:48:49.292941 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:48:49.301630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:48:49.313713 systemd-networkd[701]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:48:49.319986 disk-uuid[836]: Primary Header is updated. Nov 5 15:48:49.319986 disk-uuid[836]: Secondary Entries is updated. 
Nov 5 15:48:49.319986 disk-uuid[836]: Secondary Header is updated. Nov 5 15:48:49.313725 systemd-networkd[701]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:48:49.315405 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:48:49.315563 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:48:49.315758 systemd-networkd[701]: eth0: Link UP Nov 5 15:48:49.318122 systemd-networkd[701]: eth0: Gained carrier Nov 5 15:48:49.318137 systemd-networkd[701]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:48:49.351184 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:48:49.369026 systemd-networkd[701]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 5 15:48:49.425189 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:48:49.428452 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 15:48:49.433665 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:48:49.440947 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:48:49.443506 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:48:49.447921 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 15:48:49.554932 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:48:49.577710 systemd-resolved[352]: Detected conflict on linux IN A 10.0.0.50 Nov 5 15:48:49.577734 systemd-resolved[352]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Nov 5 15:48:50.438853 disk-uuid[838]: Warning: The kernel is still using the old partition table. 
Nov 5 15:48:50.438853 disk-uuid[838]: The new table will be used at the next reboot or after you Nov 5 15:48:50.438853 disk-uuid[838]: run partprobe(8) or kpartx(8) Nov 5 15:48:50.438853 disk-uuid[838]: The operation has completed successfully. Nov 5 15:48:50.460003 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 15:48:50.460224 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 15:48:50.470124 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 5 15:48:50.530854 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869) Nov 5 15:48:50.530926 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:48:50.530938 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:48:50.540076 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:48:50.540124 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:48:50.550099 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:48:50.552711 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 15:48:50.558972 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 5 15:48:50.830289 systemd-networkd[701]: eth0: Gained IPv6LL Nov 5 15:48:51.212874 ignition[888]: Ignition 2.22.0 Nov 5 15:48:51.212891 ignition[888]: Stage: fetch-offline Nov 5 15:48:51.212971 ignition[888]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:48:51.212987 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 15:48:51.213158 ignition[888]: parsed url from cmdline: "" Nov 5 15:48:51.213163 ignition[888]: no config URL provided Nov 5 15:48:51.213171 ignition[888]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:48:51.213187 ignition[888]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:48:51.213248 ignition[888]: op(1): [started] loading QEMU firmware config module Nov 5 15:48:51.213282 ignition[888]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 5 15:48:51.231286 ignition[888]: op(1): [finished] loading QEMU firmware config module Nov 5 15:48:51.325956 ignition[888]: parsing config with SHA512: 3aca183e3ec827b9f5d5ba03b1dc6966cb5abf9042f30a8f03b558b062fad695b43ff7b20f3d6ee23fd892d13de6dde443027a927754d038e2172af7af617b6e Nov 5 15:48:51.332418 unknown[888]: fetched base config from "system" Nov 5 15:48:51.332442 unknown[888]: fetched user config from "qemu" Nov 5 15:48:51.335493 ignition[888]: fetch-offline: fetch-offline passed Nov 5 15:48:51.335564 ignition[888]: Ignition finished successfully Nov 5 15:48:51.340029 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:48:51.345297 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 5 15:48:51.349509 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 5 15:48:51.471245 ignition[898]: Ignition 2.22.0 Nov 5 15:48:51.471265 ignition[898]: Stage: kargs Nov 5 15:48:51.471468 ignition[898]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:48:51.471479 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 15:48:51.472408 ignition[898]: kargs: kargs passed Nov 5 15:48:51.472475 ignition[898]: Ignition finished successfully Nov 5 15:48:51.484133 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 15:48:51.488629 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 15:48:51.621999 ignition[906]: Ignition 2.22.0 Nov 5 15:48:51.622018 ignition[906]: Stage: disks Nov 5 15:48:51.622275 ignition[906]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:48:51.622302 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 15:48:51.623528 ignition[906]: disks: disks passed Nov 5 15:48:51.623609 ignition[906]: Ignition finished successfully Nov 5 15:48:51.635705 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 15:48:51.636737 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 15:48:51.639632 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 15:48:51.640543 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:48:51.647136 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:48:51.647784 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:48:51.656261 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 15:48:51.724452 systemd-fsck[916]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 5 15:48:52.179726 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 15:48:52.182239 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 5 15:48:52.367106 kernel: EXT4-fs (vda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none. Nov 5 15:48:52.368011 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 15:48:52.369977 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 15:48:52.374294 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:48:52.377241 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 15:48:52.379991 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 15:48:52.380043 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 15:48:52.398068 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (924) Nov 5 15:48:52.398092 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:48:52.398104 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:48:52.398116 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:48:52.380103 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:48:52.401155 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:48:52.387384 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 15:48:52.399190 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 5 15:48:52.404946 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 15:48:52.462828 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 15:48:52.469979 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Nov 5 15:48:52.477201 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 15:48:52.482947 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 15:48:52.590267 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 15:48:52.592810 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 15:48:52.595532 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 15:48:52.617006 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 15:48:52.620663 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:48:52.635561 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 15:48:52.747735 ignition[1038]: INFO : Ignition 2.22.0 Nov 5 15:48:52.747735 ignition[1038]: INFO : Stage: mount Nov 5 15:48:52.750423 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:48:52.750423 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 15:48:52.750423 ignition[1038]: INFO : mount: mount passed Nov 5 15:48:52.750423 ignition[1038]: INFO : Ignition finished successfully Nov 5 15:48:52.759288 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 15:48:52.762826 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 15:48:53.370227 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 5 15:48:53.397376 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1050) Nov 5 15:48:53.397417 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:48:53.397429 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:48:53.403441 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:48:53.403473 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:48:53.405517 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 15:48:53.497366 ignition[1067]: INFO : Ignition 2.22.0 Nov 5 15:48:53.497366 ignition[1067]: INFO : Stage: files Nov 5 15:48:53.500506 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:48:53.500506 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 15:48:53.500506 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Nov 5 15:48:53.506879 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 15:48:53.506879 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 15:48:53.506879 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 15:48:53.514082 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 15:48:53.516950 unknown[1067]: wrote ssh authorized keys file for user: core Nov 5 15:48:53.518807 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 15:48:53.521219 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 15:48:53.521219 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 5 15:48:53.574484 
ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 15:48:53.725431 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 15:48:53.725431 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 5 15:48:53.732361 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 15:48:53.732361 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:48:53.732361 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:48:53.732361 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:48:53.732361 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:48:53.732361 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:48:53.732361 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:48:53.753383 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:48:53.753383 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:48:53.753383 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 15:48:53.890664 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 15:48:53.912140 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 15:48:53.912140 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 5 15:48:54.677114 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 5 15:48:55.512941 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 15:48:55.512941 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 5 15:48:55.525849 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:48:55.537485 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:48:55.537485 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 5 15:48:55.537485 ignition[1067]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 5 15:48:55.546148 ignition[1067]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 5 15:48:55.546148 ignition[1067]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 5 15:48:55.546148 ignition[1067]: INFO 
: files: op(d): [finished] processing unit "coreos-metadata.service" Nov 5 15:48:55.546148 ignition[1067]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 5 15:48:55.629858 ignition[1067]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 5 15:48:55.639156 ignition[1067]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 5 15:48:55.641938 ignition[1067]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 5 15:48:55.641938 ignition[1067]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 5 15:48:55.641938 ignition[1067]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 15:48:55.641938 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:48:55.641938 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:48:55.641938 ignition[1067]: INFO : files: files passed Nov 5 15:48:55.641938 ignition[1067]: INFO : Ignition finished successfully Nov 5 15:48:55.648119 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 15:48:55.651959 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 15:48:55.660489 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 15:48:55.678486 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 15:48:55.678697 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Nov 5 15:48:55.685308 initrd-setup-root-after-ignition[1098]: grep: /sysroot/oem/oem-release: No such file or directory Nov 5 15:48:55.690037 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:48:55.690037 initrd-setup-root-after-ignition[1100]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:48:55.696036 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:48:55.699669 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:48:55.700889 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 15:48:55.702506 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 15:48:55.778725 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 5 15:48:55.778908 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 15:48:55.782679 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 15:48:55.786237 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 15:48:55.791698 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 15:48:55.793019 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 15:48:55.834134 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:48:55.838557 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 15:48:55.870920 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:48:55.871156 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:48:55.872515 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 5 15:48:55.879116 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 15:48:55.882644 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 15:48:55.882790 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:48:55.888237 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 15:48:55.889154 systemd[1]: Stopped target basic.target - Basic System. Nov 5 15:48:55.894032 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 15:48:55.899785 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:48:55.900546 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 15:48:55.904669 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:48:55.908589 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 15:48:55.909227 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:48:55.915768 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 15:48:55.916665 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 15:48:55.922985 systemd[1]: Stopped target swap.target - Swaps. Nov 5 15:48:55.926896 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 15:48:55.927115 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:48:55.931655 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:48:55.935452 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:48:55.936690 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 15:48:55.940951 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:48:55.941968 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Nov 5 15:48:55.942214 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 15:48:55.943438 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 15:48:55.943612 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:48:55.952971 systemd[1]: Stopped target paths.target - Path Units. Nov 5 15:48:55.956605 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 15:48:55.964176 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:48:55.965003 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 15:48:55.969652 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 15:48:55.972716 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 15:48:55.972876 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:48:55.975930 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 15:48:55.976100 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:48:55.978997 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 15:48:55.979214 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:48:55.979915 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 15:48:55.980558 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 15:48:55.988905 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 15:48:55.990820 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 15:48:55.995677 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 15:48:55.995852 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:48:56.003282 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Nov 5 15:48:56.003467 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:48:56.004646 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 15:48:56.004843 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:48:56.021059 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 15:48:56.021213 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 15:48:56.054737 ignition[1126]: INFO : Ignition 2.22.0 Nov 5 15:48:56.054737 ignition[1126]: INFO : Stage: umount Nov 5 15:48:56.058097 ignition[1126]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:48:56.058097 ignition[1126]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 15:48:56.058097 ignition[1126]: INFO : umount: umount passed Nov 5 15:48:56.058097 ignition[1126]: INFO : Ignition finished successfully Nov 5 15:48:56.059016 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 15:48:56.060160 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 15:48:56.060331 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 15:48:56.061846 systemd[1]: Stopped target network.target - Network. Nov 5 15:48:56.067700 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 15:48:56.067854 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 15:48:56.071696 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 15:48:56.071792 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 15:48:56.075297 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 15:48:56.075375 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 15:48:56.076085 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 15:48:56.076146 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Nov 5 15:48:56.079787 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 15:48:56.080699 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 15:48:56.095729 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 15:48:56.095894 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 15:48:56.115351 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 15:48:56.115546 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 15:48:56.123200 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 15:48:56.128652 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 15:48:56.128726 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:48:56.135872 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 15:48:56.136739 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 15:48:56.136821 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:48:56.143999 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 15:48:56.144107 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:48:56.145092 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 15:48:56.145156 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 15:48:56.146524 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:48:56.156205 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 15:48:56.169878 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 15:48:56.174599 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 15:48:56.174876 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 5 15:48:56.180226 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 15:48:56.180332 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 15:48:56.182178 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 15:48:56.182246 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:48:56.182832 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 15:48:56.182917 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:48:56.191726 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 15:48:56.191832 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 15:48:56.193655 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 15:48:56.193732 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:48:56.195646 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 15:48:56.195722 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 15:48:56.198113 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 15:48:56.209375 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 15:48:56.209463 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:48:56.210004 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 15:48:56.210095 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:48:56.212076 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 5 15:48:56.212138 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:48:56.223001 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Nov 5 15:48:56.223092 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:48:56.224126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:48:56.224207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:48:56.248410 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 15:48:56.248560 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 15:48:56.273437 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 15:48:56.273633 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 15:48:56.276915 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 15:48:56.286196 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 15:48:56.324983 systemd[1]: Switching root. Nov 5 15:48:56.422093 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). Nov 5 15:48:56.422167 systemd-journald[315]: Journal stopped Nov 5 15:48:58.409784 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 15:48:58.409862 kernel: SELinux: policy capability open_perms=1 Nov 5 15:48:58.409881 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 15:48:58.409894 kernel: SELinux: policy capability always_check_network=0 Nov 5 15:48:58.409906 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 15:48:58.409922 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 15:48:58.409948 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 15:48:58.409965 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 15:48:58.409984 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 15:48:58.410010 kernel: audit: type=1403 audit(1762357737.328:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 15:48:58.410031 systemd[1]: Successfully loaded SELinux policy in 76.525ms. 
Nov 5 15:48:58.410086 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 19.909ms. Nov 5 15:48:58.410105 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:48:58.410127 systemd[1]: Detected virtualization kvm. Nov 5 15:48:58.410144 systemd[1]: Detected architecture x86-64. Nov 5 15:48:58.410160 systemd[1]: Detected first boot. Nov 5 15:48:58.410177 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 15:48:58.410194 zram_generator::config[1171]: No configuration found. Nov 5 15:48:58.410211 kernel: Guest personality initialized and is inactive Nov 5 15:48:58.410223 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 5 15:48:58.410238 kernel: Initialized host personality Nov 5 15:48:58.410250 kernel: NET: Registered PF_VSOCK protocol family Nov 5 15:48:58.410263 systemd[1]: Populated /etc with preset unit settings. Nov 5 15:48:58.410275 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 15:48:58.410289 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 15:48:58.410301 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 15:48:58.410314 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 15:48:58.410330 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 15:48:58.410343 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 15:48:58.410356 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 15:48:58.410369 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Nov 5 15:48:58.410382 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 15:48:58.410396 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 15:48:58.410408 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 15:48:58.410424 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:48:58.410438 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:48:58.410451 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 15:48:58.410464 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 15:48:58.410477 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 15:48:58.410490 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:48:58.410503 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 15:48:58.410519 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:48:58.410532 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:48:58.410546 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 15:48:58.410569 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 15:48:58.410583 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 15:48:58.410596 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 15:48:58.410611 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:48:58.410624 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:48:58.410637 systemd[1]: Reached target slices.target - Slice Units. 
Nov 5 15:48:58.410650 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:48:58.410662 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 15:48:58.410675 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 15:48:58.410689 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 15:48:58.410704 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:48:58.410717 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:48:58.410731 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:48:58.410744 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 15:48:58.410757 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 15:48:58.410771 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 15:48:58.410794 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 15:48:58.410816 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:48:58.410830 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 15:48:58.410842 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 15:48:58.410855 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 15:48:58.410868 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 15:48:58.410881 systemd[1]: Reached target machines.target - Containers. Nov 5 15:48:58.410894 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Nov 5 15:48:58.410909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:48:58.410928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:48:58.410942 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 15:48:58.410955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:48:58.410969 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:48:58.410982 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:48:58.410996 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 15:48:58.411012 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:48:58.411028 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 15:48:58.411078 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 15:48:58.411093 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 15:48:58.411110 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 15:48:58.411128 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 15:48:58.411147 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:48:58.411170 kernel: fuse: init (API version 7.41) Nov 5 15:48:58.411187 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:48:58.411203 kernel: ACPI: bus type drm_connector registered Nov 5 15:48:58.411217 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Nov 5 15:48:58.411230 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:48:58.411246 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 15:48:58.411281 systemd-journald[1235]: Collecting audit messages is disabled. Nov 5 15:48:58.411309 systemd-journald[1235]: Journal started Nov 5 15:48:58.411331 systemd-journald[1235]: Runtime Journal (/run/log/journal/9a34f5b20e734020a7c6a5f41212239c) is 5.9M, max 47.9M, 41.9M free. Nov 5 15:48:57.981429 systemd[1]: Queued start job for default target multi-user.target. Nov 5 15:48:58.001466 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 5 15:48:58.002098 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 15:48:58.415860 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 15:48:58.443294 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:48:58.448073 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:48:58.454092 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:48:58.457226 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 15:48:58.459164 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 15:48:58.461298 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 15:48:58.463177 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 15:48:58.465143 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 15:48:58.467180 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 15:48:58.469208 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Nov 5 15:48:58.471737 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 15:48:58.471976 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 15:48:58.474221 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:48:58.474442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:48:58.476691 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:48:58.476951 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:48:58.478983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:48:58.479225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:48:58.482041 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 15:48:58.482497 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 15:48:58.484831 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:48:58.485173 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:48:58.487670 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:48:58.490577 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:48:58.496303 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 15:48:58.501809 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 15:48:58.519519 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:48:58.522349 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 15:48:58.526244 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 15:48:58.532107 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Nov 5 15:48:58.535901 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 15:48:58.535964 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:48:58.538846 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 15:48:58.541175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:48:58.543076 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 15:48:58.548183 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 15:48:58.550298 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:48:58.552269 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 15:48:58.554408 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:48:58.556021 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:48:58.558962 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 15:48:58.564211 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:48:58.579341 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:48:58.582411 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 15:48:58.585280 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 15:48:58.609909 systemd-journald[1235]: Time spent on flushing to /var/log/journal/9a34f5b20e734020a7c6a5f41212239c is 23.278ms for 1030 entries. 
Nov 5 15:48:58.609909 systemd-journald[1235]: System Journal (/var/log/journal/9a34f5b20e734020a7c6a5f41212239c) is 8M, max 163.5M, 155.5M free. Nov 5 15:48:58.720478 systemd-journald[1235]: Received client request to flush runtime journal. Nov 5 15:48:58.720566 kernel: loop1: detected capacity change from 0 to 229808 Nov 5 15:48:58.720615 kernel: loop2: detected capacity change from 0 to 110984 Nov 5 15:48:58.621906 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:48:58.622947 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Nov 5 15:48:58.622966 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Nov 5 15:48:58.629575 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:48:58.690111 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 15:48:58.693730 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 15:48:58.696838 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 15:48:58.706417 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 15:48:58.710975 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 15:48:58.725485 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 15:48:58.738099 kernel: loop3: detected capacity change from 0 to 128048 Nov 5 15:48:58.746573 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 15:48:58.768792 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 15:48:58.773145 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:48:58.777149 kernel: loop4: detected capacity change from 0 to 229808 Nov 5 15:48:58.777371 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Nov 5 15:48:58.790085 kernel: loop5: detected capacity change from 0 to 110984
Nov 5 15:48:58.799112 kernel: loop6: detected capacity change from 0 to 128048
Nov 5 15:48:58.799210 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 15:48:58.810476 (sd-merge)[1312]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 5 15:48:58.813841 systemd-tmpfiles[1313]: ACLs are not supported, ignoring.
Nov 5 15:48:58.813868 systemd-tmpfiles[1313]: ACLs are not supported, ignoring.
Nov 5 15:48:58.815948 (sd-merge)[1312]: Merged extensions into '/usr'.
Nov 5 15:48:58.820412 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:48:58.825560 systemd[1]: Reload requested from client PID 1282 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 15:48:58.825581 systemd[1]: Reloading...
Nov 5 15:48:58.901102 zram_generator::config[1350]: No configuration found.
Nov 5 15:48:58.993883 systemd-resolved[1311]: Positive Trust Anchors:
Nov 5 15:48:58.993904 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 15:48:58.993911 systemd-resolved[1311]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 15:48:58.993957 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 15:48:59.000143 systemd-resolved[1311]: Defaulting to hostname 'linux'.
Nov 5 15:48:59.144556 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 15:48:59.145140 systemd[1]: Reloading finished in 319 ms.
Nov 5 15:48:59.187021 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 15:48:59.189276 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 15:48:59.191839 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 15:48:59.197565 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:48:59.220506 systemd[1]: Starting ensure-sysext.service...
Nov 5 15:48:59.224216 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 15:48:59.245024 systemd[1]: Reload requested from client PID 1385 ('systemctl') (unit ensure-sysext.service)...
Nov 5 15:48:59.245228 systemd[1]: Reloading...
Nov 5 15:48:59.258658 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 15:48:59.258720 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 15:48:59.259232 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 15:48:59.259542 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 15:48:59.260571 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 15:48:59.260935 systemd-tmpfiles[1386]: ACLs are not supported, ignoring.
Nov 5 15:48:59.261014 systemd-tmpfiles[1386]: ACLs are not supported, ignoring.
Nov 5 15:48:59.298288 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:48:59.298309 systemd-tmpfiles[1386]: Skipping /boot
Nov 5 15:48:59.310987 zram_generator::config[1416]: No configuration found.
Nov 5 15:48:59.315255 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:48:59.315274 systemd-tmpfiles[1386]: Skipping /boot
Nov 5 15:48:59.567471 systemd[1]: Reloading finished in 321 ms.
Nov 5 15:48:59.595797 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 15:48:59.622587 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:48:59.636272 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 15:48:59.641915 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 15:48:59.659770 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 15:48:59.665345 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 15:48:59.671556 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:48:59.676277 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 15:48:59.682701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:48:59.682964 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:48:59.687291 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:48:59.690683 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:48:59.701442 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:48:59.703596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:48:59.703995 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:48:59.704152 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:48:59.706466 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:48:59.706761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:48:59.711278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:48:59.712131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:48:59.726687 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 15:48:59.731999 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:48:59.734110 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:48:59.737877 systemd-udevd[1460]: Using default interface naming scheme 'v257'.
Nov 5 15:48:59.747350 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 15:48:59.752917 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:48:59.753286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:48:59.757377 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:48:59.761898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:48:59.766935 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:48:59.770759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:48:59.771008 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:48:59.771238 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:48:59.776501 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:48:59.776830 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:48:59.779187 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:48:59.781826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:48:59.782080 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:48:59.782260 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:48:59.784266 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:48:59.784578 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:48:59.787530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:48:59.787785 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:48:59.791318 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:48:59.794367 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:48:59.800888 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:48:59.801273 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:48:59.804990 augenrules[1493]: No rules
Nov 5 15:48:59.805456 systemd[1]: Finished ensure-sysext.service.
Nov 5 15:48:59.808028 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 15:48:59.808397 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 15:48:59.817408 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:48:59.817575 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:48:59.820638 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 15:48:59.824161 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:48:59.831141 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 15:48:59.833666 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 15:48:59.837123 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 15:48:59.951182 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 15:48:59.958120 systemd-networkd[1509]: lo: Link UP
Nov 5 15:48:59.960615 systemd-networkd[1509]: lo: Gained carrier
Nov 5 15:48:59.961484 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 15:48:59.964831 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 15:48:59.966949 systemd[1]: Reached target network.target - Network.
Nov 5 15:48:59.980541 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 15:48:59.995852 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 15:48:59.997205 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 15:49:00.003431 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 15:49:00.012193 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 5 15:49:00.025257 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 15:49:00.036099 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Nov 5 15:49:00.036221 kernel: mousedev: PS/2 mouse device common for all mice
Nov 5 15:49:00.039630 systemd-networkd[1509]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:49:00.039637 systemd-networkd[1509]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 15:49:00.041167 systemd-networkd[1509]: eth0: Link UP
Nov 5 15:49:00.041475 systemd-networkd[1509]: eth0: Gained carrier
Nov 5 15:49:00.041494 systemd-networkd[1509]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:49:00.044289 kernel: ACPI: button: Power Button [PWRF]
Nov 5 15:49:00.059740 systemd-networkd[1509]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 15:49:00.065338 systemd-timesyncd[1503]: Network configuration changed, trying to establish connection.
Nov 5 15:49:01.617355 systemd-timesyncd[1503]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 5 15:49:01.617435 systemd-timesyncd[1503]: Initial clock synchronization to Wed 2025-11-05 15:49:01.617249 UTC.
Nov 5 15:49:01.617524 systemd-resolved[1311]: Clock change detected. Flushing caches.
Nov 5 15:49:01.617643 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 15:49:01.696724 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Nov 5 15:49:01.700311 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 5 15:49:01.702865 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 5 15:49:01.829867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:49:01.867828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:49:01.869405 kernel: kvm_amd: TSC scaling supported
Nov 5 15:49:01.869465 kernel: kvm_amd: Nested Virtualization enabled
Nov 5 15:49:01.869505 kernel: kvm_amd: Nested Paging enabled
Nov 5 15:49:01.869538 kernel: kvm_amd: LBR virtualization supported
Nov 5 15:49:01.869578 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 5 15:49:01.869611 kernel: kvm_amd: Virtual GIF supported
Nov 5 15:49:01.868188 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:49:01.884645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:49:01.927820 kernel: EDAC MC: Ver: 3.0.0
Nov 5 15:49:01.987378 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:49:02.114749 ldconfig[1457]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 15:49:02.126741 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 15:49:02.136267 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 15:49:02.173965 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 15:49:02.177107 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 15:49:02.179954 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 5 15:49:02.183805 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 5 15:49:02.186158 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 5 15:49:02.188519 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 5 15:49:02.190732 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 5 15:49:02.193320 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 5 15:49:02.195787 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 5 15:49:02.195840 systemd[1]: Reached target paths.target - Path Units.
Nov 5 15:49:02.197658 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 15:49:02.203853 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 5 15:49:02.208367 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 5 15:49:02.214703 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 5 15:49:02.218134 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 5 15:49:02.220492 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 5 15:49:02.226511 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 5 15:49:02.229161 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 5 15:49:02.233210 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 5 15:49:02.236676 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 15:49:02.238792 systemd[1]: Reached target basic.target - Basic System.
Nov 5 15:49:02.240815 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 5 15:49:02.240853 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 5 15:49:02.242463 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 5 15:49:02.246465 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 5 15:49:02.259377 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 5 15:49:02.264237 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 5 15:49:02.269815 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 5 15:49:02.274266 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 5 15:49:02.277830 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 5 15:49:02.290096 jq[1578]: false
Nov 5 15:49:02.290934 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 5 15:49:02.295099 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 5 15:49:02.299857 extend-filesystems[1579]: Found /dev/vda6
Nov 5 15:49:02.301671 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Refreshing passwd entry cache
Nov 5 15:49:02.301709 oslogin_cache_refresh[1580]: Refreshing passwd entry cache
Nov 5 15:49:02.304488 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 5 15:49:02.307342 extend-filesystems[1579]: Found /dev/vda9
Nov 5 15:49:02.309728 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 5 15:49:02.314681 extend-filesystems[1579]: Checking size of /dev/vda9
Nov 5 15:49:02.312985 oslogin_cache_refresh[1580]: Failure getting users, quitting
Nov 5 15:49:02.318965 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Failure getting users, quitting
Nov 5 15:49:02.318965 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 15:49:02.318965 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Refreshing group entry cache
Nov 5 15:49:02.313010 oslogin_cache_refresh[1580]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 15:49:02.318895 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 5 15:49:02.313091 oslogin_cache_refresh[1580]: Refreshing group entry cache
Nov 5 15:49:02.320902 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 5 15:49:02.321718 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 5 15:49:02.323663 systemd[1]: Starting update-engine.service - Update Engine...
Nov 5 15:49:02.328553 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 5 15:49:02.332419 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 5 15:49:02.335358 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 5 15:49:02.335294 oslogin_cache_refresh[1580]: Failure getting groups, quitting
Nov 5 15:49:02.337093 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Failure getting groups, quitting
Nov 5 15:49:02.337093 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 15:49:02.335816 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 5 15:49:02.335316 oslogin_cache_refresh[1580]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 15:49:02.336226 systemd[1]: motdgen.service: Deactivated successfully.
Nov 5 15:49:02.336521 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 5 15:49:02.339489 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 5 15:49:02.340174 extend-filesystems[1579]: Resized partition /dev/vda9
Nov 5 15:49:02.345174 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 5 15:49:02.348808 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 5 15:49:02.349140 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 5 15:49:02.354500 extend-filesystems[1606]: resize2fs 1.47.3 (8-Jul-2025)
Nov 5 15:49:02.367714 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 5 15:49:02.368527 jq[1599]: true
Nov 5 15:49:02.373193 update_engine[1598]: I20251105 15:49:02.373090 1598 main.cc:92] Flatcar Update Engine starting
Nov 5 15:49:02.396552 tar[1604]: linux-amd64/LICENSE
Nov 5 15:49:02.403355 (ntainerd)[1622]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 5 15:49:02.407654 tar[1604]: linux-amd64/helm
Nov 5 15:49:02.410795 jq[1620]: true
Nov 5 15:49:02.416675 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 5 15:49:02.455341 dbus-daemon[1576]: [system] SELinux support is enabled
Nov 5 15:49:02.455618 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 5 15:49:02.461252 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 5 15:49:02.461287 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 5 15:49:02.468716 update_engine[1598]: I20251105 15:49:02.463856 1598 update_check_scheduler.cc:74] Next update check in 4m15s
Nov 5 15:49:02.463723 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 5 15:49:02.463755 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 5 15:49:02.472618 systemd[1]: Started update-engine.service - Update Engine.
Nov 5 15:49:02.477503 extend-filesystems[1606]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 5 15:49:02.477503 extend-filesystems[1606]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 5 15:49:02.477503 extend-filesystems[1606]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 5 15:49:02.491928 extend-filesystems[1579]: Resized filesystem in /dev/vda9
Nov 5 15:49:02.486188 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 5 15:49:02.491277 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 5 15:49:02.493812 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 5 15:49:02.502557 systemd-logind[1596]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 5 15:49:02.502997 systemd-logind[1596]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 5 15:49:02.505921 systemd-logind[1596]: New seat seat0.
Nov 5 15:49:02.507703 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 5 15:49:02.555945 bash[1643]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 15:49:02.559666 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 5 15:49:02.570363 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 5 15:49:02.590954 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 5 15:49:02.647951 sshd_keygen[1613]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 5 15:49:02.694665 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 5 15:49:02.699581 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 5 15:49:02.723301 systemd[1]: issuegen.service: Deactivated successfully.
Nov 5 15:49:02.723587 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 5 15:49:02.728884 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 5 15:49:03.237664 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 5 15:49:03.243071 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 5 15:49:03.246897 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 5 15:49:03.250844 systemd[1]: Reached target getty.target - Login Prompts.
Nov 5 15:49:03.286419 containerd[1622]: time="2025-11-05T15:49:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 5 15:49:03.287703 containerd[1622]: time="2025-11-05T15:49:03.287616301Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 5 15:49:03.304822 containerd[1622]: time="2025-11-05T15:49:03.304732891Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.807µs"
Nov 5 15:49:03.304822 containerd[1622]: time="2025-11-05T15:49:03.304794256Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 5 15:49:03.304822 containerd[1622]: time="2025-11-05T15:49:03.304819333Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 5 15:49:03.305177 containerd[1622]: time="2025-11-05T15:49:03.305144112Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 5 15:49:03.305177 containerd[1622]: time="2025-11-05T15:49:03.305173928Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 5 15:49:03.305230 containerd[1622]: time="2025-11-05T15:49:03.305215596Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 15:49:03.305360 containerd[1622]: time="2025-11-05T15:49:03.305318870Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 15:49:03.305360 containerd[1622]: time="2025-11-05T15:49:03.305350850Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 15:49:03.305829 containerd[1622]: time="2025-11-05T15:49:03.305765527Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 15:49:03.305829 containerd[1622]: time="2025-11-05T15:49:03.305813457Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 15:49:03.305888 containerd[1622]: time="2025-11-05T15:49:03.305853943Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 15:49:03.305888 containerd[1622]: time="2025-11-05T15:49:03.305869242Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 5 15:49:03.306161 containerd[1622]: time="2025-11-05T15:49:03.306127767Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 5 15:49:03.306524 containerd[1622]: time="2025-11-05T15:49:03.306486900Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 15:49:03.306583 containerd[1622]: time="2025-11-05T15:49:03.306560168Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 15:49:03.306605 containerd[1622]: time="2025-11-05T15:49:03.306580376Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 5 15:49:03.306663 containerd[1622]: time="2025-11-05T15:49:03.306626211Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 5 15:49:03.307367 containerd[1622]: time="2025-11-05T15:49:03.307267083Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 15:49:03.307425 containerd[1622]: time="2025-11-05T15:49:03.307403890Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 15:49:03.315211 containerd[1622]: time="2025-11-05T15:49:03.315129739Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 15:49:03.315211 containerd[1622]: time="2025-11-05T15:49:03.315220218Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 15:49:03.315359 containerd[1622]: time="2025-11-05T15:49:03.315238913Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 15:49:03.315359 containerd[1622]: time="2025-11-05T15:49:03.315286463Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 15:49:03.315359 containerd[1622]: time="2025-11-05T15:49:03.315307141Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 5 15:49:03.315359 containerd[1622]: time="2025-11-05T15:49:03.315320887Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 5 15:49:03.315359 containerd[1622]: time="2025-11-05T15:49:03.315358498Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 5 15:49:03.315507 containerd[1622]: time="2025-11-05T15:49:03.315378846Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 5 15:49:03.315507 containerd[1622]: time="2025-11-05T15:49:03.315399054Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 5 15:49:03.315507 containerd[1622]: time="2025-11-05T15:49:03.315418129Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 5 15:49:03.315507 containerd[1622]: time="2025-11-05T15:49:03.315431404Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 5 15:49:03.315507 containerd[1622]: time="2025-11-05T15:49:03.315450330Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 5 15:49:03.315785 containerd[1622]: time="2025-11-05T15:49:03.315740213Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 5 15:49:03.315831 containerd[1622]: time="2025-11-05T15:49:03.315784196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 5 15:49:03.315831 containerd[1622]: time="2025-11-05T15:49:03.315808361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 5 15:49:03.315887 containerd[1622]: time="2025-11-05T15:49:03.315851632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 5 15:49:03.315887 containerd[1622]: time="2025-11-05T15:49:03.315879344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 5 15:49:03.315937 containerd[1622]: time="2025-11-05T15:49:03.315897919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 5 15:49:03.315937 containerd[1622]: time="2025-11-05T15:49:03.315913228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 5 15:49:03.315937 containerd[1622]: time="2025-11-05T15:49:03.315929679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 5 15:49:03.316017 containerd[1622]: time="2025-11-05T15:49:03.315944457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 5 15:49:03.316017 containerd[1622]: time="2025-11-05T15:49:03.315960807Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 5 15:49:03.316017 containerd[1622]: time="2025-11-05T15:49:03.315978280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 5 15:49:03.316127 containerd[1622]: time="2025-11-05T15:49:03.316102543Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 5 15:49:03.316156 containerd[1622]: time="2025-11-05T15:49:03.316129133Z" level=info msg="Start snapshots syncer"
Nov 5 15:49:03.316181 containerd[1622]: time="2025-11-05T15:49:03.316168216Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 5 15:49:03.316833 containerd[1622]: time="2025-11-05T15:49:03.316544702Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 5 15:49:03.317653 containerd[1622]: time="2025-11-05T15:49:03.317396910Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 5 15:49:03.321008 containerd[1622]: time="2025-11-05T15:49:03.320939334Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 15:49:03.321521 containerd[1622]: time="2025-11-05T15:49:03.321479537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 15:49:03.321608 containerd[1622]: time="2025-11-05T15:49:03.321593340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 15:49:03.321697 containerd[1622]: time="2025-11-05T15:49:03.321678360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 15:49:03.321784 containerd[1622]: time="2025-11-05T15:49:03.321770052Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 15:49:03.321863 containerd[1622]: time="2025-11-05T15:49:03.321847337Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 15:49:03.322011 containerd[1622]: time="2025-11-05T15:49:03.321934951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 15:49:03.322011 containerd[1622]: time="2025-11-05T15:49:03.321959858Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 15:49:03.322180 containerd[1622]: time="2025-11-05T15:49:03.322133002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 15:49:03.322253 containerd[1622]: time="2025-11-05T15:49:03.322239272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 15:49:03.322398 containerd[1622]: time="2025-11-05T15:49:03.322321456Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 15:49:03.322485 containerd[1622]: time="2025-11-05T15:49:03.322460777Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:49:03.322593 containerd[1622]: time="2025-11-05T15:49:03.322567998Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:49:03.322679 containerd[1622]: time="2025-11-05T15:49:03.322665551Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:49:03.322816 containerd[1622]: time="2025-11-05T15:49:03.322747254Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:49:03.322816 containerd[1622]: time="2025-11-05T15:49:03.322780897Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 15:49:03.322943 containerd[1622]: time="2025-11-05T15:49:03.322904349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 15:49:03.324705 containerd[1622]: time="2025-11-05T15:49:03.322978318Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 15:49:03.324705 containerd[1622]: time="2025-11-05T15:49:03.323010989Z" level=info msg="runtime interface created" Nov 5 15:49:03.324705 containerd[1622]: time="2025-11-05T15:49:03.323018252Z" level=info msg="created NRI interface" Nov 5 15:49:03.324705 containerd[1622]: time="2025-11-05T15:49:03.323030355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 15:49:03.324705 containerd[1622]: time="2025-11-05T15:49:03.323060672Z" level=info msg="Connect containerd service" Nov 5 15:49:03.324705 containerd[1622]: time="2025-11-05T15:49:03.323100867Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 15:49:03.324705 containerd[1622]: 
time="2025-11-05T15:49:03.324253108Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:49:03.440004 systemd-networkd[1509]: eth0: Gained IPv6LL Nov 5 15:49:03.454053 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:49:03.457400 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:49:03.461741 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 5 15:49:03.467657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:49:03.472992 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:49:03.615844 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 5 15:49:03.616196 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 5 15:49:03.619297 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:49:03.767648 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 15:49:03.970854 containerd[1622]: time="2025-11-05T15:49:03.970364398Z" level=info msg="Start subscribing containerd event" Nov 5 15:49:03.971021 containerd[1622]: time="2025-11-05T15:49:03.970443826Z" level=info msg="Start recovering state" Nov 5 15:49:03.971100 containerd[1622]: time="2025-11-05T15:49:03.970718582Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 15:49:03.971178 containerd[1622]: time="2025-11-05T15:49:03.971163065Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 5 15:49:03.971791 containerd[1622]: time="2025-11-05T15:49:03.971772458Z" level=info msg="Start event monitor" Nov 5 15:49:03.971845 containerd[1622]: time="2025-11-05T15:49:03.971793788Z" level=info msg="Start cni network conf syncer for default" Nov 5 15:49:03.971845 containerd[1622]: time="2025-11-05T15:49:03.971800851Z" level=info msg="Start streaming server" Nov 5 15:49:03.971845 containerd[1622]: time="2025-11-05T15:49:03.971819837Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 15:49:03.971845 containerd[1622]: time="2025-11-05T15:49:03.971830617Z" level=info msg="runtime interface starting up..." Nov 5 15:49:03.971845 containerd[1622]: time="2025-11-05T15:49:03.971838482Z" level=info msg="starting plugins..." Nov 5 15:49:03.971965 containerd[1622]: time="2025-11-05T15:49:03.971855744Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 15:49:03.972163 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 15:49:03.975180 containerd[1622]: time="2025-11-05T15:49:03.975133752Z" level=info msg="containerd successfully booted in 0.689371s" Nov 5 15:49:04.164324 tar[1604]: linux-amd64/README.md Nov 5 15:49:04.197873 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 15:49:05.951033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:49:05.953937 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 15:49:05.956134 systemd[1]: Startup finished in 2.610s (kernel) + 9.397s (initrd) + 7.152s (userspace) = 19.160s. 
Nov 5 15:49:05.966379 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:49:07.110838 kubelet[1718]: E1105 15:49:07.110699 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:49:07.116116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:49:07.116392 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:49:07.116917 systemd[1]: kubelet.service: Consumed 2.970s CPU time, 265.7M memory peak. Nov 5 15:49:11.963204 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 15:49:11.964577 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:39258.service - OpenSSH per-connection server daemon (10.0.0.1:39258). Nov 5 15:49:12.048289 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 39258 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:49:12.050313 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:12.057209 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 15:49:12.058429 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 15:49:12.065355 systemd-logind[1596]: New session 1 of user core. Nov 5 15:49:12.084886 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 15:49:12.087936 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 5 15:49:12.106238 (systemd)[1736]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 15:49:12.108864 systemd-logind[1596]: New session c1 of user core. Nov 5 15:49:12.332055 systemd[1736]: Queued start job for default target default.target. Nov 5 15:49:12.349525 systemd[1736]: Created slice app.slice - User Application Slice. Nov 5 15:49:12.349562 systemd[1736]: Reached target paths.target - Paths. Nov 5 15:49:12.349619 systemd[1736]: Reached target timers.target - Timers. Nov 5 15:49:12.351446 systemd[1736]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 15:49:12.365384 systemd[1736]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 15:49:12.365545 systemd[1736]: Reached target sockets.target - Sockets. Nov 5 15:49:12.365587 systemd[1736]: Reached target basic.target - Basic System. Nov 5 15:49:12.365662 systemd[1736]: Reached target default.target - Main User Target. Nov 5 15:49:12.365706 systemd[1736]: Startup finished in 249ms. Nov 5 15:49:12.365986 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 15:49:12.367813 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 15:49:12.440548 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:39264.service - OpenSSH per-connection server daemon (10.0.0.1:39264). Nov 5 15:49:12.518417 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 39264 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:49:12.520196 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:12.525226 systemd-logind[1596]: New session 2 of user core. Nov 5 15:49:12.535891 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 5 15:49:12.592211 sshd[1750]: Connection closed by 10.0.0.1 port 39264 Nov 5 15:49:12.592426 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:12.608069 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:39264.service: Deactivated successfully. Nov 5 15:49:12.610189 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 15:49:12.611089 systemd-logind[1596]: Session 2 logged out. Waiting for processes to exit. Nov 5 15:49:12.613737 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:39266.service - OpenSSH per-connection server daemon (10.0.0.1:39266). Nov 5 15:49:12.614533 systemd-logind[1596]: Removed session 2. Nov 5 15:49:12.675313 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 39266 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:49:12.677306 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:12.683524 systemd-logind[1596]: New session 3 of user core. Nov 5 15:49:12.697885 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 15:49:12.749893 sshd[1759]: Connection closed by 10.0.0.1 port 39266 Nov 5 15:49:12.750282 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:12.759291 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:39266.service: Deactivated successfully. Nov 5 15:49:12.761075 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 15:49:12.761941 systemd-logind[1596]: Session 3 logged out. Waiting for processes to exit. Nov 5 15:49:12.764920 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:39270.service - OpenSSH per-connection server daemon (10.0.0.1:39270). Nov 5 15:49:12.765624 systemd-logind[1596]: Removed session 3. 
Nov 5 15:49:12.823598 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 39270 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:49:12.825222 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:12.830809 systemd-logind[1596]: New session 4 of user core. Nov 5 15:49:12.840810 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 15:49:12.898856 sshd[1768]: Connection closed by 10.0.0.1 port 39270 Nov 5 15:49:12.899115 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:12.913974 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:39270.service: Deactivated successfully. Nov 5 15:49:12.916041 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 15:49:12.916987 systemd-logind[1596]: Session 4 logged out. Waiting for processes to exit. Nov 5 15:49:12.920464 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:39286.service - OpenSSH per-connection server daemon (10.0.0.1:39286). Nov 5 15:49:12.921342 systemd-logind[1596]: Removed session 4. Nov 5 15:49:12.978076 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 39286 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:49:12.979554 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:12.985411 systemd-logind[1596]: New session 5 of user core. Nov 5 15:49:12.994912 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 5 15:49:13.061889 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 15:49:13.062307 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:49:13.083322 sudo[1779]: pam_unix(sudo:session): session closed for user root Nov 5 15:49:13.086351 sshd[1778]: Connection closed by 10.0.0.1 port 39286 Nov 5 15:49:13.086811 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:13.102471 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:39286.service: Deactivated successfully. Nov 5 15:49:13.104555 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 15:49:13.105566 systemd-logind[1596]: Session 5 logged out. Waiting for processes to exit. Nov 5 15:49:13.108589 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:39300.service - OpenSSH per-connection server daemon (10.0.0.1:39300). Nov 5 15:49:13.109867 systemd-logind[1596]: Removed session 5. Nov 5 15:49:13.165121 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 39300 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:49:13.166948 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:13.173171 systemd-logind[1596]: New session 6 of user core. Nov 5 15:49:13.184095 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 5 15:49:13.243048 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 15:49:13.243416 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:49:13.251695 sudo[1790]: pam_unix(sudo:session): session closed for user root Nov 5 15:49:13.261901 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 15:49:13.262451 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:49:13.279353 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:49:13.333209 augenrules[1812]: No rules Nov 5 15:49:13.334964 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:49:13.335268 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:49:13.336576 sudo[1789]: pam_unix(sudo:session): session closed for user root Nov 5 15:49:13.338596 sshd[1788]: Connection closed by 10.0.0.1 port 39300 Nov 5 15:49:13.339009 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:13.349601 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:39300.service: Deactivated successfully. Nov 5 15:49:13.351414 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 15:49:13.352270 systemd-logind[1596]: Session 6 logged out. Waiting for processes to exit. Nov 5 15:49:13.355117 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:39312.service - OpenSSH per-connection server daemon (10.0.0.1:39312). Nov 5 15:49:13.355817 systemd-logind[1596]: Removed session 6. Nov 5 15:49:13.429974 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 39312 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:49:13.432154 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:13.438692 systemd-logind[1596]: New session 7 of user core. 
Nov 5 15:49:13.448989 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 15:49:13.509977 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 15:49:13.510347 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:49:15.172735 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 15:49:15.192127 (dockerd)[1845]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 15:49:15.571257 dockerd[1845]: time="2025-11-05T15:49:15.571077906Z" level=info msg="Starting up" Nov 5 15:49:15.572201 dockerd[1845]: time="2025-11-05T15:49:15.572126332Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 15:49:15.586667 dockerd[1845]: time="2025-11-05T15:49:15.586587663Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 15:49:16.620487 dockerd[1845]: time="2025-11-05T15:49:16.620387737Z" level=info msg="Loading containers: start." Nov 5 15:49:16.816712 kernel: Initializing XFRM netlink socket Nov 5 15:49:17.366997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 15:49:17.370037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:49:17.471163 systemd-networkd[1509]: docker0: Link UP Nov 5 15:49:17.688187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:49:17.717214 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:49:17.846543 dockerd[1845]: time="2025-11-05T15:49:17.846461232Z" level=info msg="Loading containers: done." 
Nov 5 15:49:17.887706 dockerd[1845]: time="2025-11-05T15:49:17.887266847Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 15:49:17.887706 dockerd[1845]: time="2025-11-05T15:49:17.887441595Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 15:49:17.887706 dockerd[1845]: time="2025-11-05T15:49:17.887704428Z" level=info msg="Initializing buildkit" Nov 5 15:49:17.902061 kubelet[2030]: E1105 15:49:17.901982 2030 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:49:17.910351 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:49:17.910591 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:49:17.911083 systemd[1]: kubelet.service: Consumed 350ms CPU time, 111.1M memory peak. Nov 5 15:49:17.938869 dockerd[1845]: time="2025-11-05T15:49:17.938720982Z" level=info msg="Completed buildkit initialization" Nov 5 15:49:17.946665 dockerd[1845]: time="2025-11-05T15:49:17.946585190Z" level=info msg="Daemon has completed initialization" Nov 5 15:49:17.946782 dockerd[1845]: time="2025-11-05T15:49:17.946695908Z" level=info msg="API listen on /run/docker.sock" Nov 5 15:49:17.947019 systemd[1]: Started docker.service - Docker Application Container Engine. 
Nov 5 15:49:19.593332 containerd[1622]: time="2025-11-05T15:49:19.593281524Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 5 15:49:20.551678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936237772.mount: Deactivated successfully. Nov 5 15:49:22.503852 containerd[1622]: time="2025-11-05T15:49:22.503763967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:22.506811 containerd[1622]: time="2025-11-05T15:49:22.506767530Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 5 15:49:22.509820 containerd[1622]: time="2025-11-05T15:49:22.509766424Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:22.514077 containerd[1622]: time="2025-11-05T15:49:22.514024229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:22.515043 containerd[1622]: time="2025-11-05T15:49:22.514994318Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.921668451s" Nov 5 15:49:22.515111 containerd[1622]: time="2025-11-05T15:49:22.515037128Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 5 15:49:22.515690 containerd[1622]: time="2025-11-05T15:49:22.515624199Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 5 15:49:25.917881 containerd[1622]: time="2025-11-05T15:49:25.917803500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:25.939501 containerd[1622]: time="2025-11-05T15:49:25.939422253Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 5 15:49:25.951972 containerd[1622]: time="2025-11-05T15:49:25.951887981Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:25.987664 containerd[1622]: time="2025-11-05T15:49:25.987559097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:25.988923 containerd[1622]: time="2025-11-05T15:49:25.988868081Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 3.473179331s" Nov 5 15:49:25.988923 containerd[1622]: time="2025-11-05T15:49:25.988906473Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 5 15:49:25.990653 containerd[1622]: time="2025-11-05T15:49:25.989896340Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 5 15:49:27.996161 containerd[1622]: 
time="2025-11-05T15:49:27.996074290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:28.018851 containerd[1622]: time="2025-11-05T15:49:28.018746428Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 5 15:49:28.024353 containerd[1622]: time="2025-11-05T15:49:28.024269365Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:28.061173 containerd[1622]: time="2025-11-05T15:49:28.061069038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:28.062690 containerd[1622]: time="2025-11-05T15:49:28.062605620Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.072675927s" Nov 5 15:49:28.062690 containerd[1622]: time="2025-11-05T15:49:28.062684217Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 5 15:49:28.063235 containerd[1622]: time="2025-11-05T15:49:28.063197500Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 5 15:49:28.161410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 15:49:28.163687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 5 15:49:28.425728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:49:28.431799 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:49:28.591622 kubelet[2155]: E1105 15:49:28.591477 2155 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:49:28.596336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:49:28.596559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:49:28.597190 systemd[1]: kubelet.service: Consumed 362ms CPU time, 110.6M memory peak. Nov 5 15:49:32.840939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794637361.mount: Deactivated successfully. 
Nov 5 15:49:33.795838 containerd[1622]: time="2025-11-05T15:49:33.795101135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:49:33.797384 containerd[1622]: time="2025-11-05T15:49:33.797341587Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Nov 5 15:49:33.799129 containerd[1622]: time="2025-11-05T15:49:33.799073074Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:49:33.802819 containerd[1622]: time="2025-11-05T15:49:33.802754588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:49:33.803564 containerd[1622]: time="2025-11-05T15:49:33.803504634Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 5.740261279s"
Nov 5 15:49:33.803564 containerd[1622]: time="2025-11-05T15:49:33.803550771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Nov 5 15:49:33.804372 containerd[1622]: time="2025-11-05T15:49:33.804301288Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 5 15:49:34.376714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409186397.mount: Deactivated successfully.
Nov 5 15:49:37.109975 containerd[1622]: time="2025-11-05T15:49:37.109886150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:49:37.110609 containerd[1622]: time="2025-11-05T15:49:37.110568567Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Nov 5 15:49:37.183988 containerd[1622]: time="2025-11-05T15:49:37.183895344Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:49:37.240231 containerd[1622]: time="2025-11-05T15:49:37.240131959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:49:37.241653 containerd[1622]: time="2025-11-05T15:49:37.241585622Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.437241793s"
Nov 5 15:49:37.241734 containerd[1622]: time="2025-11-05T15:49:37.241655656Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Nov 5 15:49:37.242378 containerd[1622]: time="2025-11-05T15:49:37.242321630Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 5 15:49:37.721062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350535396.mount: Deactivated successfully.
Nov 5 15:49:37.727884 containerd[1622]: time="2025-11-05T15:49:37.727835394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:49:37.728749 containerd[1622]: time="2025-11-05T15:49:37.728700800Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 5 15:49:37.730304 containerd[1622]: time="2025-11-05T15:49:37.730250346Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:49:37.732443 containerd[1622]: time="2025-11-05T15:49:37.732398367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:49:37.733220 containerd[1622]: time="2025-11-05T15:49:37.733187188Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 490.832865ms"
Nov 5 15:49:37.733309 containerd[1622]: time="2025-11-05T15:49:37.733223487Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 5 15:49:37.734095 containerd[1622]: time="2025-11-05T15:49:37.733789851Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 5 15:49:38.363092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78827400.mount: Deactivated successfully.
Nov 5 15:49:38.782472 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 5 15:49:38.786131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:49:39.311869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:49:39.340151 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:49:39.838970 kubelet[2248]: E1105 15:49:39.838893 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:49:39.843539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:49:39.843774 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:49:39.844257 systemd[1]: kubelet.service: Consumed 322ms CPU time, 111.6M memory peak.
Nov 5 15:49:43.948657 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1096668186 wd_nsec: 1096667743
Nov 5 15:49:45.975600 containerd[1622]: time="2025-11-05T15:49:45.975530732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:49:45.976456 containerd[1622]: time="2025-11-05T15:49:45.976425591Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Nov 5 15:49:45.977716 containerd[1622]: time="2025-11-05T15:49:45.977683388Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:49:45.980477 containerd[1622]: time="2025-11-05T15:49:45.980420434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:49:45.981599 containerd[1622]: time="2025-11-05T15:49:45.981554798Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 8.247720161s"
Nov 5 15:49:45.981599 containerd[1622]: time="2025-11-05T15:49:45.981596166Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Nov 5 15:49:47.805094 update_engine[1598]: I20251105 15:49:47.804970 1598 update_attempter.cc:509] Updating boot flags...
Nov 5 15:49:49.104974 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:49:49.105230 systemd[1]: kubelet.service: Consumed 322ms CPU time, 111.6M memory peak.
Nov 5 15:49:49.108412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:49:49.823732 systemd[1]: Reload requested from client PID 2347 ('systemctl') (unit session-7.scope)...
Nov 5 15:49:49.823751 systemd[1]: Reloading...
Nov 5 15:49:49.959676 zram_generator::config[2391]: No configuration found.
Nov 5 15:49:51.687211 systemd[1]: Reloading finished in 1863 ms.
Nov 5 15:49:51.758516 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 5 15:49:51.758678 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 5 15:49:51.759084 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:49:51.759150 systemd[1]: kubelet.service: Consumed 167ms CPU time, 98.4M memory peak.
Nov 5 15:49:51.761144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:49:51.969768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:49:51.998106 (kubelet)[2439]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 15:49:52.062479 kubelet[2439]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:49:52.062479 kubelet[2439]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 15:49:52.062479 kubelet[2439]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:49:52.062935 kubelet[2439]: I1105 15:49:52.062537 2439 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 15:49:52.893662 kubelet[2439]: I1105 15:49:52.893572 2439 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 5 15:49:52.893662 kubelet[2439]: I1105 15:49:52.893616 2439 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 15:49:52.893934 kubelet[2439]: I1105 15:49:52.893904 2439 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 5 15:49:52.943974 kubelet[2439]: I1105 15:49:52.943928 2439 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 15:49:52.944419 kubelet[2439]: E1105 15:49:52.944366 2439 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 5 15:49:52.978207 kubelet[2439]: I1105 15:49:52.978151 2439 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 15:49:52.985434 kubelet[2439]: I1105 15:49:52.985376 2439 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 15:49:52.985753 kubelet[2439]: I1105 15:49:52.985704 2439 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 15:49:52.985980 kubelet[2439]: I1105 15:49:52.985740 2439 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 15:49:52.985980 kubelet[2439]: I1105 15:49:52.985975 2439 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 15:49:52.986149 kubelet[2439]: I1105 15:49:52.985989 2439 container_manager_linux.go:303] "Creating device plugin manager"
Nov 5 15:49:52.986199 kubelet[2439]: I1105 15:49:52.986185 2439 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:49:52.989063 kubelet[2439]: I1105 15:49:52.988995 2439 kubelet.go:480] "Attempting to sync node with API server"
Nov 5 15:49:52.989063 kubelet[2439]: I1105 15:49:52.989029 2439 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 15:49:52.989063 kubelet[2439]: I1105 15:49:52.989056 2439 kubelet.go:386] "Adding apiserver pod source"
Nov 5 15:49:52.989063 kubelet[2439]: I1105 15:49:52.989076 2439 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 15:49:52.994680 kubelet[2439]: I1105 15:49:52.994609 2439 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 15:49:52.995280 kubelet[2439]: I1105 15:49:52.995233 2439 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 15:49:52.996319 kubelet[2439]: W1105 15:49:52.996284 2439 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 5 15:49:52.997952 kubelet[2439]: E1105 15:49:52.997653 2439 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 5 15:49:52.997952 kubelet[2439]: E1105 15:49:52.997750 2439 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 5 15:49:52.999541 kubelet[2439]: I1105 15:49:52.999508 2439 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 15:49:52.999595 kubelet[2439]: I1105 15:49:52.999584 2439 server.go:1289] "Started kubelet"
Nov 5 15:49:53.002909 kubelet[2439]: I1105 15:49:53.002848 2439 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 15:49:53.003478 kubelet[2439]: I1105 15:49:53.003211 2439 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 15:49:53.003979 kubelet[2439]: I1105 15:49:53.003942 2439 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 15:49:53.005543 kubelet[2439]: I1105 15:49:53.004537 2439 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 15:49:53.005787 kubelet[2439]: I1105 15:49:53.005770 2439 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 15:49:53.006253 kubelet[2439]: E1105 15:49:53.006231 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:49:53.007558 kubelet[2439]: I1105 15:49:53.007543 2439 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 15:49:53.007760 kubelet[2439]: I1105 15:49:53.007747 2439 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 15:49:53.008577 kubelet[2439]: E1105 15:49:53.008493 2439 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms"
Nov 5 15:49:53.008988 kubelet[2439]: E1105 15:49:53.008801 2439 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 5 15:49:53.010870 kubelet[2439]: I1105 15:49:53.010842 2439 server.go:317] "Adding debug handlers to kubelet server"
Nov 5 15:49:53.013224 kubelet[2439]: E1105 15:49:53.008284 2439 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187527134ae0f40f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 15:49:52.999535631 +0000 UTC m=+0.995766298,LastTimestamp:2025-11-05 15:49:52.999535631 +0000 UTC m=+0.995766298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 5 15:49:53.013224 kubelet[2439]: I1105 15:49:53.012676 2439 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 15:49:53.013380 kubelet[2439]: I1105 15:49:53.013312 2439 factory.go:223] Registration of the systemd container factory successfully
Nov 5 15:49:53.013477 kubelet[2439]: I1105 15:49:53.013447 2439 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 15:49:53.015028 kubelet[2439]: E1105 15:49:53.014987 2439 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 15:49:53.015314 kubelet[2439]: I1105 15:49:53.015288 2439 factory.go:223] Registration of the containerd container factory successfully
Nov 5 15:49:53.029858 kubelet[2439]: I1105 15:49:53.029823 2439 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 15:49:53.029858 kubelet[2439]: I1105 15:49:53.029842 2439 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 15:49:53.029858 kubelet[2439]: I1105 15:49:53.029863 2439 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:49:53.107601 kubelet[2439]: E1105 15:49:53.107521 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:49:53.208795 kubelet[2439]: E1105 15:49:53.208604 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:49:53.210467 kubelet[2439]: E1105 15:49:53.209147 2439 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms"
Nov 5 15:49:53.309677 kubelet[2439]: E1105 15:49:53.309585 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:49:53.410225 kubelet[2439]: E1105 15:49:53.410146 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:49:53.488071 kubelet[2439]: I1105 15:49:53.487765 2439 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 5 15:49:53.489717 kubelet[2439]: I1105 15:49:53.489666 2439 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 5 15:49:53.489854 kubelet[2439]: I1105 15:49:53.489731 2439 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 5 15:49:53.489854 kubelet[2439]: I1105 15:49:53.489770 2439 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 15:49:53.489854 kubelet[2439]: I1105 15:49:53.489788 2439 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 5 15:49:53.489854 kubelet[2439]: E1105 15:49:53.489842 2439 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 15:49:53.490815 kubelet[2439]: E1105 15:49:53.490775 2439 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 5 15:49:53.511129 kubelet[2439]: E1105 15:49:53.511066 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:49:53.519584 kubelet[2439]: I1105 15:49:53.519531 2439 policy_none.go:49] "None policy: Start"
Nov 5 15:49:53.519584 kubelet[2439]: I1105 15:49:53.519575 2439 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 15:49:53.519584 kubelet[2439]: I1105 15:49:53.519601 2439 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 15:49:53.590470 kubelet[2439]: E1105 15:49:53.590399 2439 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 5 15:49:53.611305 kubelet[2439]: E1105 15:49:53.611208 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:49:53.611518 kubelet[2439]: E1105 15:49:53.611414 2439 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms"
Nov 5 15:49:53.697073 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 5 15:49:53.710661 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 5 15:49:53.711900 kubelet[2439]: E1105 15:49:53.711867 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:49:53.728551 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 5 15:49:53.730689 kubelet[2439]: E1105 15:49:53.730532 2439 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 15:49:53.731141 kubelet[2439]: I1105 15:49:53.731113 2439 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 15:49:53.731270 kubelet[2439]: I1105 15:49:53.731137 2439 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 15:49:53.733050 kubelet[2439]: E1105 15:49:53.733029 2439 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 15:49:53.733146 kubelet[2439]: E1105 15:49:53.733067 2439 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 5 15:49:53.736617 kubelet[2439]: I1105 15:49:53.736578 2439 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 15:49:53.812453 kubelet[2439]: I1105 15:49:53.812386 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:49:53.812607 kubelet[2439]: I1105 15:49:53.812449 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:49:53.812607 kubelet[2439]: I1105 15:49:53.812516 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:49:53.812607 kubelet[2439]: I1105 15:49:53.812538 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:49:53.812607 kubelet[2439]: I1105 15:49:53.812559 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:49:53.833532 kubelet[2439]: I1105 15:49:53.833130 2439 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 15:49:53.833532 kubelet[2439]: E1105 15:49:53.833494 2439 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Nov 5 15:49:53.883930 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice.
Nov 5 15:49:53.907029 kubelet[2439]: E1105 15:49:53.906982 2439 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 15:49:53.913166 kubelet[2439]: I1105 15:49:53.913120 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Nov 5 15:49:53.979417 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice.
Nov 5 15:49:53.981223 kubelet[2439]: E1105 15:49:53.981189 2439 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 15:49:54.013971 kubelet[2439]: I1105 15:49:54.013898 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89fdfd1761ad3a68992c81057c803388-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"89fdfd1761ad3a68992c81057c803388\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 15:49:54.013971 kubelet[2439]: I1105 15:49:54.013970 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89fdfd1761ad3a68992c81057c803388-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"89fdfd1761ad3a68992c81057c803388\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 15:49:54.014270 kubelet[2439]: I1105 15:49:54.014021 2439 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89fdfd1761ad3a68992c81057c803388-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"89fdfd1761ad3a68992c81057c803388\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 15:49:54.036416 kubelet[2439]: I1105 15:49:54.036354 2439 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 15:49:54.036870 kubelet[2439]: E1105 15:49:54.036836 2439 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Nov 5 15:49:54.049317 systemd[1]: Created slice kubepods-burstable-pod89fdfd1761ad3a68992c81057c803388.slice - libcontainer container kubepods-burstable-pod89fdfd1761ad3a68992c81057c803388.slice.
Nov 5 15:49:54.052554 kubelet[2439]: E1105 15:49:54.052508 2439 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 15:49:54.208285 kubelet[2439]: E1105 15:49:54.208098 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:49:54.209163 containerd[1622]: time="2025-11-05T15:49:54.209026214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}"
Nov 5 15:49:54.238264 kubelet[2439]: E1105 15:49:54.238201 2439 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 5 15:49:54.279396 kubelet[2439]: E1105 15:49:54.279337 2439 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 5 15:49:54.281801 kubelet[2439]: E1105 15:49:54.281775 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:49:54.282466 containerd[1622]: time="2025-11-05T15:49:54.282408831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}"
Nov 5 15:49:54.301804 containerd[1622]: time="2025-11-05T15:49:54.301752167Z" level=info msg="connecting to shim fb21cfe68e626ebe094b84d8362d1808fd767870fade2c56eb8e9f95b1da588d" address="unix:///run/containerd/s/6c589c14aecb20d11c26089bdc61f3a4664bd754199347337b6e65f60c6b57d7" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:49:54.353085 kubelet[2439]: E1105 15:49:54.353042 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:49:54.353934 systemd[1]: Started cri-containerd-fb21cfe68e626ebe094b84d8362d1808fd767870fade2c56eb8e9f95b1da588d.scope - libcontainer container fb21cfe68e626ebe094b84d8362d1808fd767870fade2c56eb8e9f95b1da588d.
Nov 5 15:49:54.354967 containerd[1622]: time="2025-11-05T15:49:54.354845515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:89fdfd1761ad3a68992c81057c803388,Namespace:kube-system,Attempt:0,}"
Nov 5 15:49:54.377723 kubelet[2439]: E1105 15:49:54.377605 2439 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 5 15:49:54.412259 kubelet[2439]: E1105 15:49:54.412205 2439 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s"
Nov 5 15:49:54.438473 kubelet[2439]: I1105 15:49:54.438418 2439 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 15:49:54.502065 kubelet[2439]: E1105 15:49:54.438984 2439 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Nov 5 15:49:54.865227 kubelet[2439]: E1105 15:49:54.865173 2439 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 5 15:49:55.074295 kubelet[2439]: E1105 15:49:55.074220 2439 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 5 15:49:55.241440 kubelet[2439]: I1105 15:49:55.241301 2439 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 15:49:55.241913 kubelet[2439]: E1105 15:49:55.241657 2439 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Nov 5 15:49:55.683317 kubelet[2439]: E1105 15:49:55.683151 2439 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187527134ae0f40f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 15:49:52.999535631 +0000 UTC m=+0.995766298,LastTimestamp:2025-11-05
15:49:52.999535631 +0000 UTC m=+0.995766298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 15:49:55.698885 containerd[1622]: time="2025-11-05T15:49:55.698838530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb21cfe68e626ebe094b84d8362d1808fd767870fade2c56eb8e9f95b1da588d\"" Nov 5 15:49:55.700284 kubelet[2439]: E1105 15:49:55.700255 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:49:55.772882 containerd[1622]: time="2025-11-05T15:49:55.772823767Z" level=info msg="CreateContainer within sandbox \"fb21cfe68e626ebe094b84d8362d1808fd767870fade2c56eb8e9f95b1da588d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 15:49:56.014729 kubelet[2439]: E1105 15:49:56.013843 2439 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="3.2s" Nov 5 15:49:56.103096 kubelet[2439]: E1105 15:49:56.102991 2439 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:49:56.157932 kubelet[2439]: E1105 15:49:56.157855 2439 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:49:56.348379 containerd[1622]: time="2025-11-05T15:49:56.348313027Z" level=info msg="connecting to shim 09525d82ceac577aad2f106fe6ddd806f3530b91f765181b4f1a6119bcec2916" address="unix:///run/containerd/s/58996c91b5b4a70e1f5b53bc881e8e82830c8fba20b2b83cd5209062c72f2f85" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:49:56.379348 systemd[1]: Started cri-containerd-09525d82ceac577aad2f106fe6ddd806f3530b91f765181b4f1a6119bcec2916.scope - libcontainer container 09525d82ceac577aad2f106fe6ddd806f3530b91f765181b4f1a6119bcec2916. Nov 5 15:49:56.405509 containerd[1622]: time="2025-11-05T15:49:56.405456151Z" level=info msg="Container 6bf64f6d048b65dfc4331285f44c409dcd84e85c76cc6c171f04c1fffc5bed6c: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:49:56.407726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677382099.mount: Deactivated successfully. 
Nov 5 15:49:56.421493 containerd[1622]: time="2025-11-05T15:49:56.421415231Z" level=info msg="connecting to shim b422717bf65bf075dcdfce7f0e8e300dd2efcddf2b5b5759f0db59c9bde2fbe5" address="unix:///run/containerd/s/f9b174c7421aea5c9cc92e0d9afa198cc1254124b50a5b338a6ddc9aa12ee642" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:49:56.452865 containerd[1622]: time="2025-11-05T15:49:56.452787613Z" level=info msg="CreateContainer within sandbox \"fb21cfe68e626ebe094b84d8362d1808fd767870fade2c56eb8e9f95b1da588d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6bf64f6d048b65dfc4331285f44c409dcd84e85c76cc6c171f04c1fffc5bed6c\"" Nov 5 15:49:56.454079 containerd[1622]: time="2025-11-05T15:49:56.454041759Z" level=info msg="StartContainer for \"6bf64f6d048b65dfc4331285f44c409dcd84e85c76cc6c171f04c1fffc5bed6c\"" Nov 5 15:49:56.455322 containerd[1622]: time="2025-11-05T15:49:56.455289955Z" level=info msg="connecting to shim 6bf64f6d048b65dfc4331285f44c409dcd84e85c76cc6c171f04c1fffc5bed6c" address="unix:///run/containerd/s/6c589c14aecb20d11c26089bdc61f3a4664bd754199347337b6e65f60c6b57d7" protocol=ttrpc version=3 Nov 5 15:49:56.461971 containerd[1622]: time="2025-11-05T15:49:56.461917489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"09525d82ceac577aad2f106fe6ddd806f3530b91f765181b4f1a6119bcec2916\"" Nov 5 15:49:56.463315 kubelet[2439]: E1105 15:49:56.463282 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:49:56.465886 systemd[1]: Started cri-containerd-b422717bf65bf075dcdfce7f0e8e300dd2efcddf2b5b5759f0db59c9bde2fbe5.scope - libcontainer container b422717bf65bf075dcdfce7f0e8e300dd2efcddf2b5b5759f0db59c9bde2fbe5. 
Nov 5 15:49:56.473873 containerd[1622]: time="2025-11-05T15:49:56.473793847Z" level=info msg="CreateContainer within sandbox \"09525d82ceac577aad2f106fe6ddd806f3530b91f765181b4f1a6119bcec2916\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 15:49:56.484896 systemd[1]: Started cri-containerd-6bf64f6d048b65dfc4331285f44c409dcd84e85c76cc6c171f04c1fffc5bed6c.scope - libcontainer container 6bf64f6d048b65dfc4331285f44c409dcd84e85c76cc6c171f04c1fffc5bed6c. Nov 5 15:49:56.488693 containerd[1622]: time="2025-11-05T15:49:56.488013195Z" level=info msg="Container 0db0dc9067e80f3b89a7d66fa30ab44cbbc241c99c8624b206ac2fbc51f64dae: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:49:56.511673 containerd[1622]: time="2025-11-05T15:49:56.510997059Z" level=info msg="CreateContainer within sandbox \"09525d82ceac577aad2f106fe6ddd806f3530b91f765181b4f1a6119bcec2916\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0db0dc9067e80f3b89a7d66fa30ab44cbbc241c99c8624b206ac2fbc51f64dae\"" Nov 5 15:49:56.511673 containerd[1622]: time="2025-11-05T15:49:56.511559720Z" level=info msg="StartContainer for \"0db0dc9067e80f3b89a7d66fa30ab44cbbc241c99c8624b206ac2fbc51f64dae\"" Nov 5 15:49:56.513427 containerd[1622]: time="2025-11-05T15:49:56.513383121Z" level=info msg="connecting to shim 0db0dc9067e80f3b89a7d66fa30ab44cbbc241c99c8624b206ac2fbc51f64dae" address="unix:///run/containerd/s/58996c91b5b4a70e1f5b53bc881e8e82830c8fba20b2b83cd5209062c72f2f85" protocol=ttrpc version=3 Nov 5 15:49:56.546495 containerd[1622]: time="2025-11-05T15:49:56.546432366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:89fdfd1761ad3a68992c81057c803388,Namespace:kube-system,Attempt:0,} returns sandbox id \"b422717bf65bf075dcdfce7f0e8e300dd2efcddf2b5b5759f0db59c9bde2fbe5\"" Nov 5 15:49:56.546989 systemd[1]: Started cri-containerd-0db0dc9067e80f3b89a7d66fa30ab44cbbc241c99c8624b206ac2fbc51f64dae.scope - libcontainer container 
0db0dc9067e80f3b89a7d66fa30ab44cbbc241c99c8624b206ac2fbc51f64dae. Nov 5 15:49:56.548836 kubelet[2439]: E1105 15:49:56.548781 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:49:56.601045 kubelet[2439]: E1105 15:49:56.600780 2439 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:49:56.751081 containerd[1622]: time="2025-11-05T15:49:56.751009148Z" level=info msg="CreateContainer within sandbox \"b422717bf65bf075dcdfce7f0e8e300dd2efcddf2b5b5759f0db59c9bde2fbe5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 15:49:56.844518 kubelet[2439]: I1105 15:49:56.844092 2439 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:49:56.844518 kubelet[2439]: E1105 15:49:56.844473 2439 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Nov 5 15:49:56.869351 containerd[1622]: time="2025-11-05T15:49:56.869019133Z" level=info msg="StartContainer for \"0db0dc9067e80f3b89a7d66fa30ab44cbbc241c99c8624b206ac2fbc51f64dae\" returns successfully" Nov 5 15:49:56.869742 containerd[1622]: time="2025-11-05T15:49:56.869698505Z" level=info msg="StartContainer for \"6bf64f6d048b65dfc4331285f44c409dcd84e85c76cc6c171f04c1fffc5bed6c\" returns successfully" Nov 5 15:49:56.875351 kubelet[2439]: E1105 15:49:56.875291 2439 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:49:57.170488 containerd[1622]: time="2025-11-05T15:49:57.170356256Z" level=info msg="Container 0c13677a9e506c4ef625a05f8f7681b5219a97b50a031e390564f2591738aabc: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:49:57.499751 containerd[1622]: time="2025-11-05T15:49:57.499218338Z" level=info msg="CreateContainer within sandbox \"b422717bf65bf075dcdfce7f0e8e300dd2efcddf2b5b5759f0db59c9bde2fbe5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0c13677a9e506c4ef625a05f8f7681b5219a97b50a031e390564f2591738aabc\"" Nov 5 15:49:57.500052 containerd[1622]: time="2025-11-05T15:49:57.500022465Z" level=info msg="StartContainer for \"0c13677a9e506c4ef625a05f8f7681b5219a97b50a031e390564f2591738aabc\"" Nov 5 15:49:57.501383 containerd[1622]: time="2025-11-05T15:49:57.501353265Z" level=info msg="connecting to shim 0c13677a9e506c4ef625a05f8f7681b5219a97b50a031e390564f2591738aabc" address="unix:///run/containerd/s/f9b174c7421aea5c9cc92e0d9afa198cc1254124b50a5b338a6ddc9aa12ee642" protocol=ttrpc version=3 Nov 5 15:49:57.513258 kubelet[2439]: E1105 15:49:57.512605 2439 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:49:57.513258 kubelet[2439]: E1105 15:49:57.512747 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:49:57.516308 kubelet[2439]: E1105 15:49:57.516209 2439 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:49:57.516995 kubelet[2439]: E1105 15:49:57.516936 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:49:57.583988 systemd[1]: Started cri-containerd-0c13677a9e506c4ef625a05f8f7681b5219a97b50a031e390564f2591738aabc.scope - libcontainer container 0c13677a9e506c4ef625a05f8f7681b5219a97b50a031e390564f2591738aabc. Nov 5 15:49:57.689602 containerd[1622]: time="2025-11-05T15:49:57.689540041Z" level=info msg="StartContainer for \"0c13677a9e506c4ef625a05f8f7681b5219a97b50a031e390564f2591738aabc\" returns successfully" Nov 5 15:49:58.523204 kubelet[2439]: E1105 15:49:58.523140 2439 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:49:58.523709 kubelet[2439]: E1105 15:49:58.523259 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:49:58.524387 kubelet[2439]: E1105 15:49:58.524354 2439 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:49:58.524469 kubelet[2439]: E1105 15:49:58.524445 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:49:58.525181 kubelet[2439]: E1105 15:49:58.525147 2439 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:49:58.525236 kubelet[2439]: E1105 15:49:58.525229 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:49:59.524684 kubelet[2439]: E1105 15:49:59.524603 2439 kubelet.go:3305] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:49:59.525154 kubelet[2439]: E1105 15:49:59.524843 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:49:59.525154 kubelet[2439]: E1105 15:49:59.525025 2439 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:49:59.525154 kubelet[2439]: E1105 15:49:59.525127 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:49:59.834513 kubelet[2439]: E1105 15:49:59.834461 2439 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 5 15:50:00.047874 kubelet[2439]: I1105 15:50:00.047820 2439 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:50:00.057420 kubelet[2439]: I1105 15:50:00.057370 2439 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 15:50:00.057420 kubelet[2439]: E1105 15:50:00.057413 2439 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 5 15:50:00.067206 kubelet[2439]: E1105 15:50:00.067163 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:50:00.167830 kubelet[2439]: E1105 15:50:00.167675 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:50:00.267864 kubelet[2439]: E1105 15:50:00.267780 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:50:00.368993 kubelet[2439]: 
E1105 15:50:00.368913 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:50:00.470132 kubelet[2439]: E1105 15:50:00.469894 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:50:00.526112 kubelet[2439]: E1105 15:50:00.526077 2439 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:50:00.526571 kubelet[2439]: E1105 15:50:00.526146 2439 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:50:00.526571 kubelet[2439]: E1105 15:50:00.526212 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:00.526571 kubelet[2439]: E1105 15:50:00.526230 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:00.570583 kubelet[2439]: E1105 15:50:00.570528 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:50:00.671548 kubelet[2439]: E1105 15:50:00.671456 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:50:00.772427 kubelet[2439]: E1105 15:50:00.772245 2439 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:50:00.807574 kubelet[2439]: I1105 15:50:00.807516 2439 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:50:00.817650 kubelet[2439]: I1105 15:50:00.817594 2439 kubelet.go:3309] "Creating a mirror pod 
for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 15:50:00.822334 kubelet[2439]: I1105 15:50:00.822291 2439 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 15:50:00.994849 kubelet[2439]: I1105 15:50:00.994770 2439 apiserver.go:52] "Watching apiserver" Nov 5 15:50:00.999312 kubelet[2439]: E1105 15:50:00.999285 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:01.008388 kubelet[2439]: I1105 15:50:01.008360 2439 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:50:01.526884 kubelet[2439]: E1105 15:50:01.526831 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:01.526884 kubelet[2439]: E1105 15:50:01.526859 2439 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:02.263227 systemd[1]: Reload requested from client PID 2723 ('systemctl') (unit session-7.scope)... Nov 5 15:50:02.263247 systemd[1]: Reloading... Nov 5 15:50:02.359685 zram_generator::config[2767]: No configuration found. Nov 5 15:50:02.843305 systemd[1]: Reloading finished in 579 ms. Nov 5 15:50:02.866797 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:02.888148 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 15:50:02.888499 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:02.888571 systemd[1]: kubelet.service: Consumed 1.591s CPU time, 132.1M memory peak. Nov 5 15:50:02.890762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 5 15:50:03.117470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:03.127021 (kubelet)[2812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:50:03.170007 kubelet[2812]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:50:03.170007 kubelet[2812]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:50:03.170007 kubelet[2812]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:50:03.170422 kubelet[2812]: I1105 15:50:03.170066 2812 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:50:03.178388 kubelet[2812]: I1105 15:50:03.178324 2812 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 15:50:03.178388 kubelet[2812]: I1105 15:50:03.178352 2812 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:50:03.178876 kubelet[2812]: I1105 15:50:03.178837 2812 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:50:03.181969 kubelet[2812]: I1105 15:50:03.181937 2812 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 15:50:03.185139 kubelet[2812]: I1105 15:50:03.185087 2812 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:50:03.191107 kubelet[2812]: I1105 15:50:03.191072 
2812 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:50:03.196705 kubelet[2812]: I1105 15:50:03.196683 2812 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 15:50:03.196957 kubelet[2812]: I1105 15:50:03.196914 2812 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:50:03.197109 kubelet[2812]: I1105 15:50:03.196949 2812 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,
"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:50:03.197189 kubelet[2812]: I1105 15:50:03.197118 2812 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:50:03.197189 kubelet[2812]: I1105 15:50:03.197128 2812 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 15:50:03.197189 kubelet[2812]: I1105 15:50:03.197171 2812 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:03.197358 kubelet[2812]: I1105 15:50:03.197343 2812 kubelet.go:480] "Attempting to sync node with API server" Nov 5 15:50:03.197388 kubelet[2812]: I1105 15:50:03.197367 2812 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:50:03.197409 kubelet[2812]: I1105 15:50:03.197390 2812 kubelet.go:386] "Adding apiserver pod source" Nov 5 15:50:03.197409 kubelet[2812]: I1105 15:50:03.197409 2812 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:50:03.200158 kubelet[2812]: I1105 15:50:03.199944 2812 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:50:03.200771 kubelet[2812]: I1105 15:50:03.200740 2812 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:50:03.205760 kubelet[2812]: I1105 15:50:03.205731 2812 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:50:03.205884 kubelet[2812]: I1105 15:50:03.205862 2812 server.go:1289] "Started kubelet" Nov 5 15:50:03.206472 kubelet[2812]: I1105 15:50:03.206397 2812 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:50:03.206785 kubelet[2812]: I1105 15:50:03.206743 2812 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:50:03.207647 kubelet[2812]: I1105 15:50:03.207536 2812 server.go:255] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 15:50:03.211503 kubelet[2812]: I1105 15:50:03.211290 2812 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 15:50:03.212133 kubelet[2812]: I1105 15:50:03.212109 2812 server.go:317] "Adding debug handlers to kubelet server"
Nov 5 15:50:03.220667 kubelet[2812]: E1105 15:50:03.220614 2812 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 15:50:03.222504 kubelet[2812]: I1105 15:50:03.221812 2812 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 15:50:03.224694 kubelet[2812]: I1105 15:50:03.224205 2812 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 15:50:03.224694 kubelet[2812]: I1105 15:50:03.224335 2812 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 15:50:03.224694 kubelet[2812]: I1105 15:50:03.224454 2812 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 15:50:03.230035 kubelet[2812]: I1105 15:50:03.229881 2812 factory.go:223] Registration of the systemd container factory successfully
Nov 5 15:50:03.231527 kubelet[2812]: I1105 15:50:03.231460 2812 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 15:50:03.232920 kubelet[2812]: I1105 15:50:03.232582 2812 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 5 15:50:03.234031 kubelet[2812]: I1105 15:50:03.233934 2812 factory.go:223] Registration of the containerd container factory successfully
Nov 5 15:50:03.236789 kubelet[2812]: I1105 15:50:03.236758 2812 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 5 15:50:03.236789 kubelet[2812]: I1105 15:50:03.236791 2812 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 5 15:50:03.236864 kubelet[2812]: I1105 15:50:03.236818 2812 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 15:50:03.236864 kubelet[2812]: I1105 15:50:03.236829 2812 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 5 15:50:03.236919 kubelet[2812]: E1105 15:50:03.236897 2812 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 15:50:03.313177 kubelet[2812]: I1105 15:50:03.313126 2812 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 15:50:03.313177 kubelet[2812]: I1105 15:50:03.313151 2812 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 15:50:03.313177 kubelet[2812]: I1105 15:50:03.313172 2812 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:50:03.313422 kubelet[2812]: I1105 15:50:03.313312 2812 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 5 15:50:03.313422 kubelet[2812]: I1105 15:50:03.313323 2812 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 5 15:50:03.313422 kubelet[2812]: I1105 15:50:03.313343 2812 policy_none.go:49] "None policy: Start"
Nov 5 15:50:03.313422 kubelet[2812]: I1105 15:50:03.313353 2812 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 15:50:03.313422 kubelet[2812]: I1105 15:50:03.313366 2812 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 15:50:03.313584 kubelet[2812]: I1105 15:50:03.313478 2812 state_mem.go:75] "Updated machine memory state"
Nov 5 15:50:03.319566 kubelet[2812]: E1105 15:50:03.319509 2812 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 15:50:03.321303 kubelet[2812]: I1105 15:50:03.321259 2812 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 15:50:03.321303 kubelet[2812]: I1105 15:50:03.321279 2812 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 15:50:03.322096 kubelet[2812]: I1105 15:50:03.322074 2812 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 15:50:03.325219 kubelet[2812]: E1105 15:50:03.325045 2812 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 15:50:03.337829 kubelet[2812]: I1105 15:50:03.337791 2812 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:50:03.338583 kubelet[2812]: I1105 15:50:03.338256 2812 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 15:50:03.339012 kubelet[2812]: I1105 15:50:03.338415 2812 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 15:50:03.352935 kubelet[2812]: E1105 15:50:03.352847 2812 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 5 15:50:03.353151 kubelet[2812]: E1105 15:50:03.353056 2812 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:50:03.353690 kubelet[2812]: E1105 15:50:03.353662 2812 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 5 15:50:03.425191 kubelet[2812]: I1105 15:50:03.425055 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:50:03.430400 kubelet[2812]: I1105 15:50:03.430175 2812 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 15:50:03.436851 kubelet[2812]: I1105 15:50:03.436823 2812 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 5 15:50:03.436991 kubelet[2812]: I1105 15:50:03.436891 2812 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 5 15:50:03.526171 kubelet[2812]: I1105 15:50:03.526099 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:50:03.526171 kubelet[2812]: I1105 15:50:03.526154 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:50:03.526171 kubelet[2812]: I1105 15:50:03.526182 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Nov 5 15:50:03.526423 kubelet[2812]: I1105 15:50:03.526206 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89fdfd1761ad3a68992c81057c803388-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"89fdfd1761ad3a68992c81057c803388\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 15:50:03.526423 kubelet[2812]: I1105 15:50:03.526230 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89fdfd1761ad3a68992c81057c803388-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"89fdfd1761ad3a68992c81057c803388\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 15:50:03.526423 kubelet[2812]: I1105 15:50:03.526278 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:50:03.526423 kubelet[2812]: I1105 15:50:03.526301 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:50:03.526423 kubelet[2812]: I1105 15:50:03.526341 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89fdfd1761ad3a68992c81057c803388-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"89fdfd1761ad3a68992c81057c803388\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 15:50:03.653586 kubelet[2812]: E1105 15:50:03.653264 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:03.653586 kubelet[2812]: E1105 15:50:03.653401 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:03.654994 kubelet[2812]: E1105 15:50:03.654964 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:04.199299 kubelet[2812]: I1105 15:50:04.199232 2812 apiserver.go:52] "Watching apiserver"
Nov 5 15:50:04.224951 kubelet[2812]: I1105 15:50:04.224887 2812 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 15:50:04.272706 kubelet[2812]: I1105 15:50:04.272219 2812 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 15:50:04.272706 kubelet[2812]: I1105 15:50:04.272310 2812 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 15:50:04.272706 kubelet[2812]: I1105 15:50:04.272320 2812 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:50:04.342664 kubelet[2812]: E1105 15:50:04.342579 2812 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 5 15:50:04.343132 kubelet[2812]: E1105 15:50:04.342841 2812 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:50:04.343132 kubelet[2812]: E1105 15:50:04.343065 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:04.343315 kubelet[2812]: E1105 15:50:04.343144 2812 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 5 15:50:04.343794 kubelet[2812]: E1105 15:50:04.343721 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:04.344399 kubelet[2812]: E1105 15:50:04.344327 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:04.384291 kubelet[2812]: I1105 15:50:04.384131 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.384062117 podStartE2EDuration="4.384062117s" podCreationTimestamp="2025-11-05 15:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:04.34329169 +0000 UTC m=+1.207967035" watchObservedRunningTime="2025-11-05 15:50:04.384062117 +0000 UTC m=+1.248737462"
Nov 5 15:50:04.446058 kubelet[2812]: I1105 15:50:04.445943 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.445921465 podStartE2EDuration="4.445921465s" podCreationTimestamp="2025-11-05 15:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:04.384354838 +0000 UTC m=+1.249030213" watchObservedRunningTime="2025-11-05 15:50:04.445921465 +0000 UTC m=+1.310596810"
Nov 5 15:50:04.477230 kubelet[2812]: I1105 15:50:04.477068 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.47705339 podStartE2EDuration="4.47705339s" podCreationTimestamp="2025-11-05 15:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:04.450883655 +0000 UTC m=+1.315559030" watchObservedRunningTime="2025-11-05 15:50:04.47705339 +0000 UTC m=+1.341728735"
Nov 5 15:50:05.273789 kubelet[2812]: E1105 15:50:05.273714 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:05.273789 kubelet[2812]: E1105 15:50:05.273714 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:05.274305 kubelet[2812]: E1105 15:50:05.273922 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:06.275478 kubelet[2812]: E1105 15:50:06.275442 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:08.375039 kubelet[2812]: I1105 15:50:08.374923 2812 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 5 15:50:08.376027 containerd[1622]: time="2025-11-05T15:50:08.375478594Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 5 15:50:08.376499 kubelet[2812]: I1105 15:50:08.376462 2812 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 5 15:50:09.019343 systemd[1]: Created slice kubepods-besteffort-podb91e18f4_472c_4549_a214_96b28cc30c25.slice - libcontainer container kubepods-besteffort-podb91e18f4_472c_4549_a214_96b28cc30c25.slice.
Nov 5 15:50:09.060450 kubelet[2812]: I1105 15:50:09.060366 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b91e18f4-472c-4549-a214-96b28cc30c25-kube-proxy\") pod \"kube-proxy-p4c6m\" (UID: \"b91e18f4-472c-4549-a214-96b28cc30c25\") " pod="kube-system/kube-proxy-p4c6m"
Nov 5 15:50:09.060450 kubelet[2812]: I1105 15:50:09.060430 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b91e18f4-472c-4549-a214-96b28cc30c25-xtables-lock\") pod \"kube-proxy-p4c6m\" (UID: \"b91e18f4-472c-4549-a214-96b28cc30c25\") " pod="kube-system/kube-proxy-p4c6m"
Nov 5 15:50:09.060450 kubelet[2812]: I1105 15:50:09.060447 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b91e18f4-472c-4549-a214-96b28cc30c25-lib-modules\") pod \"kube-proxy-p4c6m\" (UID: \"b91e18f4-472c-4549-a214-96b28cc30c25\") " pod="kube-system/kube-proxy-p4c6m"
Nov 5 15:50:09.060450 kubelet[2812]: I1105 15:50:09.060465 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npfqc\" (UniqueName: \"kubernetes.io/projected/b91e18f4-472c-4549-a214-96b28cc30c25-kube-api-access-npfqc\") pod \"kube-proxy-p4c6m\" (UID: \"b91e18f4-472c-4549-a214-96b28cc30c25\") " pod="kube-system/kube-proxy-p4c6m"
Nov 5 15:50:09.236662 systemd[1]: Created slice kubepods-besteffort-podac8ac6fd_ae18_4a52_baeb_f40a25dfa413.slice - libcontainer container kubepods-besteffort-podac8ac6fd_ae18_4a52_baeb_f40a25dfa413.slice.
Nov 5 15:50:09.262680 kubelet[2812]: I1105 15:50:09.262626 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ac8ac6fd-ae18-4a52-baeb-f40a25dfa413-var-lib-calico\") pod \"tigera-operator-7dcd859c48-k2rrl\" (UID: \"ac8ac6fd-ae18-4a52-baeb-f40a25dfa413\") " pod="tigera-operator/tigera-operator-7dcd859c48-k2rrl"
Nov 5 15:50:09.262788 kubelet[2812]: I1105 15:50:09.262685 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p89gq\" (UniqueName: \"kubernetes.io/projected/ac8ac6fd-ae18-4a52-baeb-f40a25dfa413-kube-api-access-p89gq\") pod \"tigera-operator-7dcd859c48-k2rrl\" (UID: \"ac8ac6fd-ae18-4a52-baeb-f40a25dfa413\") " pod="tigera-operator/tigera-operator-7dcd859c48-k2rrl"
Nov 5 15:50:09.332167 kubelet[2812]: E1105 15:50:09.332119 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:09.332594 containerd[1622]: time="2025-11-05T15:50:09.332559113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p4c6m,Uid:b91e18f4-472c-4549-a214-96b28cc30c25,Namespace:kube-system,Attempt:0,}"
Nov 5 15:50:09.383393 containerd[1622]: time="2025-11-05T15:50:09.383333092Z" level=info msg="connecting to shim 3afd967d94f08a0e7d78a90ba22b95ffc948cb291fb27d75ba1c11048a8aebad" address="unix:///run/containerd/s/d37dfd8f74de1897739662b9555145927185bbb2042202fb64850280951f51ca" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:50:09.453802 systemd[1]: Started cri-containerd-3afd967d94f08a0e7d78a90ba22b95ffc948cb291fb27d75ba1c11048a8aebad.scope - libcontainer container 3afd967d94f08a0e7d78a90ba22b95ffc948cb291fb27d75ba1c11048a8aebad.
Nov 5 15:50:09.495884 containerd[1622]: time="2025-11-05T15:50:09.495807596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p4c6m,Uid:b91e18f4-472c-4549-a214-96b28cc30c25,Namespace:kube-system,Attempt:0,} returns sandbox id \"3afd967d94f08a0e7d78a90ba22b95ffc948cb291fb27d75ba1c11048a8aebad\""
Nov 5 15:50:09.496479 kubelet[2812]: E1105 15:50:09.496448 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:09.542514 containerd[1622]: time="2025-11-05T15:50:09.542438866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-k2rrl,Uid:ac8ac6fd-ae18-4a52-baeb-f40a25dfa413,Namespace:tigera-operator,Attempt:0,}"
Nov 5 15:50:09.555327 containerd[1622]: time="2025-11-05T15:50:09.555236538Z" level=info msg="CreateContainer within sandbox \"3afd967d94f08a0e7d78a90ba22b95ffc948cb291fb27d75ba1c11048a8aebad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 5 15:50:09.577722 containerd[1622]: time="2025-11-05T15:50:09.577670280Z" level=info msg="Container da9a4506fe0361e175bc34858cc5d3c29f20197d0b9e75eb5c86a8d745481eb2: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:50:09.587490 containerd[1622]: time="2025-11-05T15:50:09.587356313Z" level=info msg="CreateContainer within sandbox \"3afd967d94f08a0e7d78a90ba22b95ffc948cb291fb27d75ba1c11048a8aebad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"da9a4506fe0361e175bc34858cc5d3c29f20197d0b9e75eb5c86a8d745481eb2\""
Nov 5 15:50:09.588656 containerd[1622]: time="2025-11-05T15:50:09.588609870Z" level=info msg="connecting to shim 8b1e86de341a907b6c5a1275c41c48f134c9df22cf8f899c87578e5dd8f13205" address="unix:///run/containerd/s/51f4f5d8cf3a3f331b7c8d65cd8a23dfafbe2a21469f7d6ec305f4c4e9ab96ac" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:50:09.588856 containerd[1622]: time="2025-11-05T15:50:09.588829012Z" level=info msg="StartContainer for \"da9a4506fe0361e175bc34858cc5d3c29f20197d0b9e75eb5c86a8d745481eb2\""
Nov 5 15:50:09.591009 containerd[1622]: time="2025-11-05T15:50:09.590965658Z" level=info msg="connecting to shim da9a4506fe0361e175bc34858cc5d3c29f20197d0b9e75eb5c86a8d745481eb2" address="unix:///run/containerd/s/d37dfd8f74de1897739662b9555145927185bbb2042202fb64850280951f51ca" protocol=ttrpc version=3
Nov 5 15:50:09.620937 systemd[1]: Started cri-containerd-da9a4506fe0361e175bc34858cc5d3c29f20197d0b9e75eb5c86a8d745481eb2.scope - libcontainer container da9a4506fe0361e175bc34858cc5d3c29f20197d0b9e75eb5c86a8d745481eb2.
Nov 5 15:50:09.628815 systemd[1]: Started cri-containerd-8b1e86de341a907b6c5a1275c41c48f134c9df22cf8f899c87578e5dd8f13205.scope - libcontainer container 8b1e86de341a907b6c5a1275c41c48f134c9df22cf8f899c87578e5dd8f13205.
Nov 5 15:50:09.680531 containerd[1622]: time="2025-11-05T15:50:09.680476724Z" level=info msg="StartContainer for \"da9a4506fe0361e175bc34858cc5d3c29f20197d0b9e75eb5c86a8d745481eb2\" returns successfully"
Nov 5 15:50:09.689422 containerd[1622]: time="2025-11-05T15:50:09.689371208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-k2rrl,Uid:ac8ac6fd-ae18-4a52-baeb-f40a25dfa413,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8b1e86de341a907b6c5a1275c41c48f134c9df22cf8f899c87578e5dd8f13205\""
Nov 5 15:50:09.690946 containerd[1622]: time="2025-11-05T15:50:09.690921674Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 5 15:50:10.284494 kubelet[2812]: E1105 15:50:10.283345 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:10.302014 kubelet[2812]: I1105 15:50:10.301948 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p4c6m" podStartSLOduration=2.301929314 podStartE2EDuration="2.301929314s" podCreationTimestamp="2025-11-05 15:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:10.301537938 +0000 UTC m=+7.166213303" watchObservedRunningTime="2025-11-05 15:50:10.301929314 +0000 UTC m=+7.166604649"
Nov 5 15:50:11.050518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4072684831.mount: Deactivated successfully.
Nov 5 15:50:11.554963 containerd[1622]: time="2025-11-05T15:50:11.554890680Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:50:11.555915 containerd[1622]: time="2025-11-05T15:50:11.555840615Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 5 15:50:11.557673 containerd[1622]: time="2025-11-05T15:50:11.557605032Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:50:11.561617 containerd[1622]: time="2025-11-05T15:50:11.561571748Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:50:11.562455 containerd[1622]: time="2025-11-05T15:50:11.562385078Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.871318011s"
Nov 5 15:50:11.562455 containerd[1622]: time="2025-11-05T15:50:11.562446212Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 5 15:50:11.566581 containerd[1622]: time="2025-11-05T15:50:11.566527314Z" level=info msg="CreateContainer within sandbox \"8b1e86de341a907b6c5a1275c41c48f134c9df22cf8f899c87578e5dd8f13205\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 5 15:50:11.576027 containerd[1622]: time="2025-11-05T15:50:11.575954594Z" level=info msg="Container c16d688dcf6dbcd12e1b633bc7f97a261171b4012c26e8456450557e5e3f5860: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:50:11.584113 containerd[1622]: time="2025-11-05T15:50:11.584050983Z" level=info msg="CreateContainer within sandbox \"8b1e86de341a907b6c5a1275c41c48f134c9df22cf8f899c87578e5dd8f13205\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c16d688dcf6dbcd12e1b633bc7f97a261171b4012c26e8456450557e5e3f5860\""
Nov 5 15:50:11.584977 containerd[1622]: time="2025-11-05T15:50:11.584920556Z" level=info msg="StartContainer for \"c16d688dcf6dbcd12e1b633bc7f97a261171b4012c26e8456450557e5e3f5860\""
Nov 5 15:50:11.586210 containerd[1622]: time="2025-11-05T15:50:11.586153765Z" level=info msg="connecting to shim c16d688dcf6dbcd12e1b633bc7f97a261171b4012c26e8456450557e5e3f5860" address="unix:///run/containerd/s/51f4f5d8cf3a3f331b7c8d65cd8a23dfafbe2a21469f7d6ec305f4c4e9ab96ac" protocol=ttrpc version=3
Nov 5 15:50:11.612897 systemd[1]: Started cri-containerd-c16d688dcf6dbcd12e1b633bc7f97a261171b4012c26e8456450557e5e3f5860.scope - libcontainer container c16d688dcf6dbcd12e1b633bc7f97a261171b4012c26e8456450557e5e3f5860.
Nov 5 15:50:11.652206 containerd[1622]: time="2025-11-05T15:50:11.652142743Z" level=info msg="StartContainer for \"c16d688dcf6dbcd12e1b633bc7f97a261171b4012c26e8456450557e5e3f5860\" returns successfully"
Nov 5 15:50:12.300967 kubelet[2812]: I1105 15:50:12.300878 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-k2rrl" podStartSLOduration=1.428159625 podStartE2EDuration="3.300860374s" podCreationTimestamp="2025-11-05 15:50:09 +0000 UTC" firstStartedPulling="2025-11-05 15:50:09.690496024 +0000 UTC m=+6.555171369" lastFinishedPulling="2025-11-05 15:50:11.563196773 +0000 UTC m=+8.427872118" observedRunningTime="2025-11-05 15:50:12.300749815 +0000 UTC m=+9.165425170" watchObservedRunningTime="2025-11-05 15:50:12.300860374 +0000 UTC m=+9.165535719"
Nov 5 15:50:12.378522 kubelet[2812]: E1105 15:50:12.378480 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:12.925664 kubelet[2812]: E1105 15:50:12.925304 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:13.293243 kubelet[2812]: E1105 15:50:13.293193 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:13.295172 kubelet[2812]: E1105 15:50:13.295136 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:14.295147 kubelet[2812]: E1105 15:50:14.295094 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:14.514425 kubelet[2812]: E1105 15:50:14.514355 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:16.265343 sudo[1825]: pam_unix(sudo:session): session closed for user root
Nov 5 15:50:16.271687 sshd[1824]: Connection closed by 10.0.0.1 port 39312
Nov 5 15:50:16.272154 sshd-session[1821]: pam_unix(sshd:session): session closed for user core
Nov 5 15:50:16.280335 systemd-logind[1596]: Session 7 logged out. Waiting for processes to exit.
Nov 5 15:50:16.281467 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:39312.service: Deactivated successfully.
Nov 5 15:50:16.289602 systemd[1]: session-7.scope: Deactivated successfully.
Nov 5 15:50:16.290026 systemd[1]: session-7.scope: Consumed 8.107s CPU time, 211.5M memory peak.
Nov 5 15:50:16.306077 systemd-logind[1596]: Removed session 7.
Nov 5 15:50:21.179079 systemd[1]: Created slice kubepods-besteffort-podc6881202_c068_40aa_9b3f_86b23663d5c5.slice - libcontainer container kubepods-besteffort-podc6881202_c068_40aa_9b3f_86b23663d5c5.slice.
Nov 5 15:50:21.250321 kubelet[2812]: I1105 15:50:21.250232 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c6881202-c068-40aa-9b3f-86b23663d5c5-typha-certs\") pod \"calico-typha-557f6698f7-gw5g8\" (UID: \"c6881202-c068-40aa-9b3f-86b23663d5c5\") " pod="calico-system/calico-typha-557f6698f7-gw5g8"
Nov 5 15:50:21.250321 kubelet[2812]: I1105 15:50:21.250290 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lszwr\" (UniqueName: \"kubernetes.io/projected/c6881202-c068-40aa-9b3f-86b23663d5c5-kube-api-access-lszwr\") pod \"calico-typha-557f6698f7-gw5g8\" (UID: \"c6881202-c068-40aa-9b3f-86b23663d5c5\") " pod="calico-system/calico-typha-557f6698f7-gw5g8"
Nov 5 15:50:21.250321 kubelet[2812]: I1105 15:50:21.250311 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6881202-c068-40aa-9b3f-86b23663d5c5-tigera-ca-bundle\") pod \"calico-typha-557f6698f7-gw5g8\" (UID: \"c6881202-c068-40aa-9b3f-86b23663d5c5\") " pod="calico-system/calico-typha-557f6698f7-gw5g8"
Nov 5 15:50:21.352112 kubelet[2812]: I1105 15:50:21.351984 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d3e1f76-a803-4146-9fe9-ae2a598ff35e-tigera-ca-bundle\") pod \"calico-node-gft9w\" (UID: \"7d3e1f76-a803-4146-9fe9-ae2a598ff35e\") " pod="calico-system/calico-node-gft9w"
Nov 5 15:50:21.352112 kubelet[2812]: I1105 15:50:21.352067 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d3e1f76-a803-4146-9fe9-ae2a598ff35e-lib-modules\") pod \"calico-node-gft9w\" (UID: \"7d3e1f76-a803-4146-9fe9-ae2a598ff35e\") " pod="calico-system/calico-node-gft9w"
Nov 5 15:50:21.353305 kubelet[2812]: I1105 15:50:21.352359 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7d3e1f76-a803-4146-9fe9-ae2a598ff35e-var-lib-calico\") pod \"calico-node-gft9w\" (UID: \"7d3e1f76-a803-4146-9fe9-ae2a598ff35e\") " pod="calico-system/calico-node-gft9w"
Nov 5 15:50:21.353305 kubelet[2812]: I1105 15:50:21.352461 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7d3e1f76-a803-4146-9fe9-ae2a598ff35e-cni-bin-dir\") pod \"calico-node-gft9w\" (UID: \"7d3e1f76-a803-4146-9fe9-ae2a598ff35e\") " pod="calico-system/calico-node-gft9w"
Nov 5 15:50:21.353305 kubelet[2812]: I1105 15:50:21.352485 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjp2j\" (UniqueName: \"kubernetes.io/projected/7d3e1f76-a803-4146-9fe9-ae2a598ff35e-kube-api-access-pjp2j\") pod \"calico-node-gft9w\" (UID: \"7d3e1f76-a803-4146-9fe9-ae2a598ff35e\") " pod="calico-system/calico-node-gft9w"
Nov 5 15:50:21.353305 kubelet[2812]: I1105 15:50:21.352530 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7d3e1f76-a803-4146-9fe9-ae2a598ff35e-policysync\") pod \"calico-node-gft9w\" (UID: \"7d3e1f76-a803-4146-9fe9-ae2a598ff35e\") " pod="calico-system/calico-node-gft9w"
Nov 5 15:50:21.353305 kubelet[2812]: I1105 15:50:21.352576 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7d3e1f76-a803-4146-9fe9-ae2a598ff35e-cni-net-dir\") pod \"calico-node-gft9w\" (UID: \"7d3e1f76-a803-4146-9fe9-ae2a598ff35e\") " pod="calico-system/calico-node-gft9w"
Nov 5 15:50:21.353501 kubelet[2812]: I1105 15:50:21.352597 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7d3e1f76-a803-4146-9fe9-ae2a598ff35e-flexvol-driver-host\") pod \"calico-node-gft9w\" (UID: \"7d3e1f76-a803-4146-9fe9-ae2a598ff35e\") " pod="calico-system/calico-node-gft9w"
Nov 5 15:50:21.353501 kubelet[2812]: I1105 15:50:21.352647 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7d3e1f76-a803-4146-9fe9-ae2a598ff35e-cni-log-dir\") pod \"calico-node-gft9w\" (UID: \"7d3e1f76-a803-4146-9fe9-ae2a598ff35e\") " pod="calico-system/calico-node-gft9w"
Nov 5 15:50:21.353501 kubelet[2812]: I1105 15:50:21.352669 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7d3e1f76-a803-4146-9fe9-ae2a598ff35e-node-certs\") pod \"calico-node-gft9w\" (UID: \"7d3e1f76-a803-4146-9fe9-ae2a598ff35e\") " pod="calico-system/calico-node-gft9w"
Nov 5 15:50:21.353501 kubelet[2812]: I1105 15:50:21.352685 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7d3e1f76-a803-4146-9fe9-ae2a598ff35e-var-run-calico\") pod \"calico-node-gft9w\" (UID: \"7d3e1f76-a803-4146-9fe9-ae2a598ff35e\") " pod="calico-system/calico-node-gft9w"
Nov 5 15:50:21.353501 kubelet[2812]: I1105 15:50:21.352709 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d3e1f76-a803-4146-9fe9-ae2a598ff35e-xtables-lock\") pod \"calico-node-gft9w\" (UID: \"7d3e1f76-a803-4146-9fe9-ae2a598ff35e\") " pod="calico-system/calico-node-gft9w"
Nov 5 15:50:21.372891 systemd[1]: Created slice kubepods-besteffort-pod7d3e1f76_a803_4146_9fe9_ae2a598ff35e.slice - libcontainer container kubepods-besteffort-pod7d3e1f76_a803_4146_9fe9_ae2a598ff35e.slice.
Nov 5 15:50:21.460928 kubelet[2812]: E1105 15:50:21.460796 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:50:21.460928 kubelet[2812]: W1105 15:50:21.460824 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:50:21.460928 kubelet[2812]: E1105 15:50:21.460863 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:50:21.464756 kubelet[2812]: E1105 15:50:21.464731 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:50:21.464756 kubelet[2812]: W1105 15:50:21.464751 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:50:21.464944 kubelet[2812]: E1105 15:50:21.464769 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:50:21.483124 kubelet[2812]: E1105 15:50:21.483049 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:50:21.483893 containerd[1622]: time="2025-11-05T15:50:21.483806350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-557f6698f7-gw5g8,Uid:c6881202-c068-40aa-9b3f-86b23663d5c5,Namespace:calico-system,Attempt:0,}"
Nov 5 15:50:21.511588 containerd[1622]: time="2025-11-05T15:50:21.511513694Z" level=info msg="connecting to shim f7293b6346fbac0dc21fd200338e97b4059f66f02e5545aa47c0c98b12709d48" address="unix:///run/containerd/s/9e215f528a4391f4e63b765e7c688757685385f949e940a7aa5b7b824ced1b2f" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:50:21.554408 systemd[1]: Started cri-containerd-f7293b6346fbac0dc21fd200338e97b4059f66f02e5545aa47c0c98b12709d48.scope - libcontainer container f7293b6346fbac0dc21fd200338e97b4059f66f02e5545aa47c0c98b12709d48.
Nov 5 15:50:21.597337 kubelet[2812]: E1105 15:50:21.597013 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:50:21.631659 containerd[1622]: time="2025-11-05T15:50:21.631592487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-557f6698f7-gw5g8,Uid:c6881202-c068-40aa-9b3f-86b23663d5c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7293b6346fbac0dc21fd200338e97b4059f66f02e5545aa47c0c98b12709d48\"" Nov 5 15:50:21.632618 kubelet[2812]: E1105 15:50:21.632578 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:21.633616 containerd[1622]: time="2025-11-05T15:50:21.633551424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 15:50:21.648672 kubelet[2812]: E1105 15:50:21.648618 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.648672 kubelet[2812]: W1105 15:50:21.648660 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.648888 kubelet[2812]: E1105 15:50:21.648689 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.648988 kubelet[2812]: E1105 15:50:21.648955 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.648988 kubelet[2812]: W1105 15:50:21.648970 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.648988 kubelet[2812]: E1105 15:50:21.648982 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.649255 kubelet[2812]: E1105 15:50:21.649236 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.649255 kubelet[2812]: W1105 15:50:21.649251 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.649385 kubelet[2812]: E1105 15:50:21.649263 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.649626 kubelet[2812]: E1105 15:50:21.649593 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.649626 kubelet[2812]: W1105 15:50:21.649608 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.649626 kubelet[2812]: E1105 15:50:21.649620 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.650076 kubelet[2812]: E1105 15:50:21.650054 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.650076 kubelet[2812]: W1105 15:50:21.650071 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.650185 kubelet[2812]: E1105 15:50:21.650087 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.650437 kubelet[2812]: E1105 15:50:21.650364 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.650437 kubelet[2812]: W1105 15:50:21.650384 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.650437 kubelet[2812]: E1105 15:50:21.650399 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.650707 kubelet[2812]: E1105 15:50:21.650654 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.650707 kubelet[2812]: W1105 15:50:21.650665 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.650707 kubelet[2812]: E1105 15:50:21.650678 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.651023 kubelet[2812]: E1105 15:50:21.650963 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.651023 kubelet[2812]: W1105 15:50:21.650981 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.651023 kubelet[2812]: E1105 15:50:21.650993 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.651269 kubelet[2812]: E1105 15:50:21.651241 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.651269 kubelet[2812]: W1105 15:50:21.651258 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.651269 kubelet[2812]: E1105 15:50:21.651269 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.651596 kubelet[2812]: E1105 15:50:21.651565 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.651669 kubelet[2812]: W1105 15:50:21.651595 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.651669 kubelet[2812]: E1105 15:50:21.651627 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.651920 kubelet[2812]: E1105 15:50:21.651906 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.651920 kubelet[2812]: W1105 15:50:21.651918 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.652002 kubelet[2812]: E1105 15:50:21.651928 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.652529 kubelet[2812]: E1105 15:50:21.652286 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.652529 kubelet[2812]: W1105 15:50:21.652309 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.652529 kubelet[2812]: E1105 15:50:21.652323 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.652683 kubelet[2812]: E1105 15:50:21.652595 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.652683 kubelet[2812]: W1105 15:50:21.652606 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.652683 kubelet[2812]: E1105 15:50:21.652618 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.653043 kubelet[2812]: E1105 15:50:21.652844 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.653043 kubelet[2812]: W1105 15:50:21.652860 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.653043 kubelet[2812]: E1105 15:50:21.652869 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.653215 kubelet[2812]: E1105 15:50:21.653189 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.653215 kubelet[2812]: W1105 15:50:21.653207 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.653303 kubelet[2812]: E1105 15:50:21.653219 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.653505 kubelet[2812]: E1105 15:50:21.653483 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.653505 kubelet[2812]: W1105 15:50:21.653504 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.653599 kubelet[2812]: E1105 15:50:21.653526 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.653868 kubelet[2812]: E1105 15:50:21.653841 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.653868 kubelet[2812]: W1105 15:50:21.653855 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.653868 kubelet[2812]: E1105 15:50:21.653868 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.654058 kubelet[2812]: E1105 15:50:21.654043 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.654058 kubelet[2812]: W1105 15:50:21.654054 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.654126 kubelet[2812]: E1105 15:50:21.654064 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.654275 kubelet[2812]: E1105 15:50:21.654262 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.654307 kubelet[2812]: W1105 15:50:21.654273 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.654307 kubelet[2812]: E1105 15:50:21.654284 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.654507 kubelet[2812]: E1105 15:50:21.654491 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.654507 kubelet[2812]: W1105 15:50:21.654505 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.654601 kubelet[2812]: E1105 15:50:21.654516 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.654978 kubelet[2812]: E1105 15:50:21.654954 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.654978 kubelet[2812]: W1105 15:50:21.654967 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.655053 kubelet[2812]: E1105 15:50:21.654978 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.655053 kubelet[2812]: I1105 15:50:21.655007 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/908be0d9-6b2b-4915-9d34-62f14a2dce18-registration-dir\") pod \"csi-node-driver-5fldj\" (UID: \"908be0d9-6b2b-4915-9d34-62f14a2dce18\") " pod="calico-system/csi-node-driver-5fldj" Nov 5 15:50:21.656607 kubelet[2812]: E1105 15:50:21.656570 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.656607 kubelet[2812]: W1105 15:50:21.656594 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.656607 kubelet[2812]: E1105 15:50:21.656608 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.656894 kubelet[2812]: I1105 15:50:21.656662 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/908be0d9-6b2b-4915-9d34-62f14a2dce18-varrun\") pod \"csi-node-driver-5fldj\" (UID: \"908be0d9-6b2b-4915-9d34-62f14a2dce18\") " pod="calico-system/csi-node-driver-5fldj" Nov 5 15:50:21.656989 kubelet[2812]: E1105 15:50:21.656939 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.656989 kubelet[2812]: W1105 15:50:21.656954 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.656989 kubelet[2812]: E1105 15:50:21.656969 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.657284 kubelet[2812]: E1105 15:50:21.657225 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.657284 kubelet[2812]: W1105 15:50:21.657237 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.657284 kubelet[2812]: E1105 15:50:21.657249 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.657603 kubelet[2812]: E1105 15:50:21.657497 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.657603 kubelet[2812]: W1105 15:50:21.657508 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.657603 kubelet[2812]: E1105 15:50:21.657521 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.657603 kubelet[2812]: I1105 15:50:21.657569 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffvt6\" (UniqueName: \"kubernetes.io/projected/908be0d9-6b2b-4915-9d34-62f14a2dce18-kube-api-access-ffvt6\") pod \"csi-node-driver-5fldj\" (UID: \"908be0d9-6b2b-4915-9d34-62f14a2dce18\") " pod="calico-system/csi-node-driver-5fldj" Nov 5 15:50:21.658034 kubelet[2812]: E1105 15:50:21.657964 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.658034 kubelet[2812]: W1105 15:50:21.658013 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.658034 kubelet[2812]: E1105 15:50:21.658026 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.658160 kubelet[2812]: I1105 15:50:21.658102 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/908be0d9-6b2b-4915-9d34-62f14a2dce18-kubelet-dir\") pod \"csi-node-driver-5fldj\" (UID: \"908be0d9-6b2b-4915-9d34-62f14a2dce18\") " pod="calico-system/csi-node-driver-5fldj" Nov 5 15:50:21.658469 kubelet[2812]: E1105 15:50:21.658447 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.658469 kubelet[2812]: W1105 15:50:21.658463 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.658576 kubelet[2812]: E1105 15:50:21.658475 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.658859 kubelet[2812]: E1105 15:50:21.658839 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.658859 kubelet[2812]: W1105 15:50:21.658860 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.658943 kubelet[2812]: E1105 15:50:21.658873 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.659251 kubelet[2812]: E1105 15:50:21.659226 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.659326 kubelet[2812]: W1105 15:50:21.659248 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.659326 kubelet[2812]: E1105 15:50:21.659272 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.659326 kubelet[2812]: I1105 15:50:21.659311 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/908be0d9-6b2b-4915-9d34-62f14a2dce18-socket-dir\") pod \"csi-node-driver-5fldj\" (UID: \"908be0d9-6b2b-4915-9d34-62f14a2dce18\") " pod="calico-system/csi-node-driver-5fldj" Nov 5 15:50:21.659592 kubelet[2812]: E1105 15:50:21.659572 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.659592 kubelet[2812]: W1105 15:50:21.659589 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.659699 kubelet[2812]: E1105 15:50:21.659602 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.659865 kubelet[2812]: E1105 15:50:21.659838 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.659919 kubelet[2812]: W1105 15:50:21.659860 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.659919 kubelet[2812]: E1105 15:50:21.659880 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.660117 kubelet[2812]: E1105 15:50:21.660102 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.660117 kubelet[2812]: W1105 15:50:21.660113 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.660201 kubelet[2812]: E1105 15:50:21.660123 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.660352 kubelet[2812]: E1105 15:50:21.660336 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.660352 kubelet[2812]: W1105 15:50:21.660347 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.660408 kubelet[2812]: E1105 15:50:21.660357 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.660604 kubelet[2812]: E1105 15:50:21.660588 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.660604 kubelet[2812]: W1105 15:50:21.660599 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.660773 kubelet[2812]: E1105 15:50:21.660609 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.660964 kubelet[2812]: E1105 15:50:21.660834 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.660964 kubelet[2812]: W1105 15:50:21.660846 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.660964 kubelet[2812]: E1105 15:50:21.660855 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.691654 kubelet[2812]: E1105 15:50:21.691470 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:21.694777 containerd[1622]: time="2025-11-05T15:50:21.694722070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gft9w,Uid:7d3e1f76-a803-4146-9fe9-ae2a598ff35e,Namespace:calico-system,Attempt:0,}" Nov 5 15:50:21.722065 containerd[1622]: time="2025-11-05T15:50:21.721419117Z" level=info msg="connecting to shim 034fc7b01bae511ee75fe30b11628b0e3d925e10c5bda6ab84e2291b9311c5d0" address="unix:///run/containerd/s/81bf8cfe54fbd7fecc807da03bc33137b08c04535d13dbc0dd5948666bfc0068" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:21.746846 systemd[1]: Started cri-containerd-034fc7b01bae511ee75fe30b11628b0e3d925e10c5bda6ab84e2291b9311c5d0.scope - libcontainer container 034fc7b01bae511ee75fe30b11628b0e3d925e10c5bda6ab84e2291b9311c5d0. 
Nov 5 15:50:21.761971 kubelet[2812]: E1105 15:50:21.761929 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.761971 kubelet[2812]: W1105 15:50:21.761956 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.762113 kubelet[2812]: E1105 15:50:21.761981 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.762470 kubelet[2812]: E1105 15:50:21.762430 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.762470 kubelet[2812]: W1105 15:50:21.762468 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.762559 kubelet[2812]: E1105 15:50:21.762485 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.762821 kubelet[2812]: E1105 15:50:21.762795 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.762821 kubelet[2812]: W1105 15:50:21.762810 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.762903 kubelet[2812]: E1105 15:50:21.762847 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.763193 kubelet[2812]: E1105 15:50:21.763164 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.763262 kubelet[2812]: W1105 15:50:21.763243 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.763305 kubelet[2812]: E1105 15:50:21.763262 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.763694 kubelet[2812]: E1105 15:50:21.763670 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.763694 kubelet[2812]: W1105 15:50:21.763685 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.763762 kubelet[2812]: E1105 15:50:21.763696 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.764091 kubelet[2812]: E1105 15:50:21.764066 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.764148 kubelet[2812]: W1105 15:50:21.764078 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.764148 kubelet[2812]: E1105 15:50:21.764141 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.764493 kubelet[2812]: E1105 15:50:21.764463 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.764493 kubelet[2812]: W1105 15:50:21.764478 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.764493 kubelet[2812]: E1105 15:50:21.764491 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.765755 kubelet[2812]: E1105 15:50:21.765575 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.765755 kubelet[2812]: W1105 15:50:21.765590 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.765755 kubelet[2812]: E1105 15:50:21.765602 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.766140 kubelet[2812]: E1105 15:50:21.765951 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.766140 kubelet[2812]: W1105 15:50:21.765965 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.766140 kubelet[2812]: E1105 15:50:21.765979 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.766747 kubelet[2812]: E1105 15:50:21.766530 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.766747 kubelet[2812]: W1105 15:50:21.766558 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.766747 kubelet[2812]: E1105 15:50:21.766571 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.766930 kubelet[2812]: E1105 15:50:21.766907 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.766930 kubelet[2812]: W1105 15:50:21.766925 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.767020 kubelet[2812]: E1105 15:50:21.766940 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.767988 kubelet[2812]: E1105 15:50:21.767958 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.767988 kubelet[2812]: W1105 15:50:21.767975 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.767988 kubelet[2812]: E1105 15:50:21.767988 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.768244 kubelet[2812]: E1105 15:50:21.768216 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.768244 kubelet[2812]: W1105 15:50:21.768236 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.768436 kubelet[2812]: E1105 15:50:21.768249 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.770916 kubelet[2812]: E1105 15:50:21.770880 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.770916 kubelet[2812]: W1105 15:50:21.770900 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.770916 kubelet[2812]: E1105 15:50:21.770916 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.771193 kubelet[2812]: E1105 15:50:21.771167 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.771193 kubelet[2812]: W1105 15:50:21.771183 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.771193 kubelet[2812]: E1105 15:50:21.771194 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.771528 kubelet[2812]: E1105 15:50:21.771511 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.771528 kubelet[2812]: W1105 15:50:21.771526 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.771653 kubelet[2812]: E1105 15:50:21.771550 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.771858 kubelet[2812]: E1105 15:50:21.771832 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.771858 kubelet[2812]: W1105 15:50:21.771846 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.771858 kubelet[2812]: E1105 15:50:21.771857 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.772098 kubelet[2812]: E1105 15:50:21.772082 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.772098 kubelet[2812]: W1105 15:50:21.772095 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.772220 kubelet[2812]: E1105 15:50:21.772106 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.772557 kubelet[2812]: E1105 15:50:21.772526 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.772557 kubelet[2812]: W1105 15:50:21.772552 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.772677 kubelet[2812]: E1105 15:50:21.772563 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.773185 kubelet[2812]: E1105 15:50:21.773167 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.773342 kubelet[2812]: W1105 15:50:21.773324 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.773453 kubelet[2812]: E1105 15:50:21.773438 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.774066 kubelet[2812]: E1105 15:50:21.773915 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.774066 kubelet[2812]: W1105 15:50:21.773942 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.774066 kubelet[2812]: E1105 15:50:21.773958 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.774424 kubelet[2812]: E1105 15:50:21.774286 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.774424 kubelet[2812]: W1105 15:50:21.774301 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.774424 kubelet[2812]: E1105 15:50:21.774313 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.774921 kubelet[2812]: E1105 15:50:21.774816 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.775121 kubelet[2812]: W1105 15:50:21.774984 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.775121 kubelet[2812]: E1105 15:50:21.775002 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.775254 kubelet[2812]: E1105 15:50:21.775239 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.775528 kubelet[2812]: W1105 15:50:21.775363 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.775528 kubelet[2812]: E1105 15:50:21.775383 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:21.776041 kubelet[2812]: E1105 15:50:21.775986 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.776041 kubelet[2812]: W1105 15:50:21.775999 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.776041 kubelet[2812]: E1105 15:50:21.776011 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:21.778198 containerd[1622]: time="2025-11-05T15:50:21.778163302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gft9w,Uid:7d3e1f76-a803-4146-9fe9-ae2a598ff35e,Namespace:calico-system,Attempt:0,} returns sandbox id \"034fc7b01bae511ee75fe30b11628b0e3d925e10c5bda6ab84e2291b9311c5d0\"" Nov 5 15:50:21.779139 kubelet[2812]: E1105 15:50:21.779116 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:21.784387 kubelet[2812]: E1105 15:50:21.784347 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:21.784387 kubelet[2812]: W1105 15:50:21.784367 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:21.784387 kubelet[2812]: E1105 15:50:21.784388 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:23.238461 kubelet[2812]: E1105 15:50:23.238395 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:50:23.364601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1339519344.mount: Deactivated successfully. 
Nov 5 15:50:24.213444 containerd[1622]: time="2025-11-05T15:50:24.213365634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:24.216615 containerd[1622]: time="2025-11-05T15:50:24.216560061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 5 15:50:24.218896 containerd[1622]: time="2025-11-05T15:50:24.218856431Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:24.221669 containerd[1622]: time="2025-11-05T15:50:24.221010264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:24.221943 containerd[1622]: time="2025-11-05T15:50:24.221876170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.58827411s" Nov 5 15:50:24.221943 containerd[1622]: time="2025-11-05T15:50:24.221911476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 15:50:24.223130 containerd[1622]: time="2025-11-05T15:50:24.223057327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:50:24.238832 containerd[1622]: time="2025-11-05T15:50:24.238787356Z" level=info msg="CreateContainer within sandbox \"f7293b6346fbac0dc21fd200338e97b4059f66f02e5545aa47c0c98b12709d48\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 15:50:24.248297 containerd[1622]: time="2025-11-05T15:50:24.248205726Z" level=info msg="Container c37565ff5917601850fb4c20e486b0d4a250a2298ebe261d52308c381194e6ac: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:24.260074 containerd[1622]: time="2025-11-05T15:50:24.260030654Z" level=info msg="CreateContainer within sandbox \"f7293b6346fbac0dc21fd200338e97b4059f66f02e5545aa47c0c98b12709d48\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c37565ff5917601850fb4c20e486b0d4a250a2298ebe261d52308c381194e6ac\"" Nov 5 15:50:24.260616 containerd[1622]: time="2025-11-05T15:50:24.260560028Z" level=info msg="StartContainer for \"c37565ff5917601850fb4c20e486b0d4a250a2298ebe261d52308c381194e6ac\"" Nov 5 15:50:24.262270 containerd[1622]: time="2025-11-05T15:50:24.262241755Z" level=info msg="connecting to shim c37565ff5917601850fb4c20e486b0d4a250a2298ebe261d52308c381194e6ac" address="unix:///run/containerd/s/9e215f528a4391f4e63b765e7c688757685385f949e940a7aa5b7b824ced1b2f" protocol=ttrpc version=3 Nov 5 15:50:24.289960 systemd[1]: Started cri-containerd-c37565ff5917601850fb4c20e486b0d4a250a2298ebe261d52308c381194e6ac.scope - libcontainer container c37565ff5917601850fb4c20e486b0d4a250a2298ebe261d52308c381194e6ac. 
Nov 5 15:50:24.359188 containerd[1622]: time="2025-11-05T15:50:24.359131079Z" level=info msg="StartContainer for \"c37565ff5917601850fb4c20e486b0d4a250a2298ebe261d52308c381194e6ac\" returns successfully" Nov 5 15:50:25.237995 kubelet[2812]: E1105 15:50:25.237896 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:50:25.333258 kubelet[2812]: E1105 15:50:25.333193 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:25.345174 kubelet[2812]: I1105 15:50:25.345083 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-557f6698f7-gw5g8" podStartSLOduration=1.7553227900000001 podStartE2EDuration="4.345062953s" podCreationTimestamp="2025-11-05 15:50:21 +0000 UTC" firstStartedPulling="2025-11-05 15:50:21.633184415 +0000 UTC m=+18.497859760" lastFinishedPulling="2025-11-05 15:50:24.222924568 +0000 UTC m=+21.087599923" observedRunningTime="2025-11-05 15:50:25.344196628 +0000 UTC m=+22.208871973" watchObservedRunningTime="2025-11-05 15:50:25.345062953 +0000 UTC m=+22.209738299" Nov 5 15:50:25.378231 kubelet[2812]: E1105 15:50:25.378142 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.378231 kubelet[2812]: W1105 15:50:25.378171 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.378231 kubelet[2812]: E1105 15:50:25.378197 2812 plugins.go:703] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.378540 kubelet[2812]: E1105 15:50:25.378496 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.378540 kubelet[2812]: W1105 15:50:25.378506 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.378540 kubelet[2812]: E1105 15:50:25.378515 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.378799 kubelet[2812]: E1105 15:50:25.378765 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.378799 kubelet[2812]: W1105 15:50:25.378779 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.378799 kubelet[2812]: E1105 15:50:25.378788 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.379282 kubelet[2812]: E1105 15:50:25.379241 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.379282 kubelet[2812]: W1105 15:50:25.379255 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.379282 kubelet[2812]: E1105 15:50:25.379265 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.379525 kubelet[2812]: E1105 15:50:25.379486 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.379525 kubelet[2812]: W1105 15:50:25.379502 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.379525 kubelet[2812]: E1105 15:50:25.379513 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.379775 kubelet[2812]: E1105 15:50:25.379745 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.379775 kubelet[2812]: W1105 15:50:25.379760 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.379775 kubelet[2812]: E1105 15:50:25.379768 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.379978 kubelet[2812]: E1105 15:50:25.379951 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.379978 kubelet[2812]: W1105 15:50:25.379963 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.379978 kubelet[2812]: E1105 15:50:25.379972 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.380186 kubelet[2812]: E1105 15:50:25.380157 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.380186 kubelet[2812]: W1105 15:50:25.380170 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.380186 kubelet[2812]: E1105 15:50:25.380179 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.380413 kubelet[2812]: E1105 15:50:25.380380 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.380413 kubelet[2812]: W1105 15:50:25.380395 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.380413 kubelet[2812]: E1105 15:50:25.380405 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.380650 kubelet[2812]: E1105 15:50:25.380614 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.380650 kubelet[2812]: W1105 15:50:25.380625 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.380650 kubelet[2812]: E1105 15:50:25.380647 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.380851 kubelet[2812]: E1105 15:50:25.380829 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.380851 kubelet[2812]: W1105 15:50:25.380840 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.380851 kubelet[2812]: E1105 15:50:25.380848 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.381053 kubelet[2812]: E1105 15:50:25.381032 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.381053 kubelet[2812]: W1105 15:50:25.381042 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.381053 kubelet[2812]: E1105 15:50:25.381051 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.381267 kubelet[2812]: E1105 15:50:25.381240 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.381267 kubelet[2812]: W1105 15:50:25.381252 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.381267 kubelet[2812]: E1105 15:50:25.381260 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.381503 kubelet[2812]: E1105 15:50:25.381475 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.381503 kubelet[2812]: W1105 15:50:25.381488 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.381503 kubelet[2812]: E1105 15:50:25.381500 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.381750 kubelet[2812]: E1105 15:50:25.381729 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.381750 kubelet[2812]: W1105 15:50:25.381741 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.381750 kubelet[2812]: E1105 15:50:25.381750 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.394754 kubelet[2812]: E1105 15:50:25.394696 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.394754 kubelet[2812]: W1105 15:50:25.394726 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.394754 kubelet[2812]: E1105 15:50:25.394751 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.395143 kubelet[2812]: E1105 15:50:25.395088 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.395143 kubelet[2812]: W1105 15:50:25.395116 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.395143 kubelet[2812]: E1105 15:50:25.395139 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.395551 kubelet[2812]: E1105 15:50:25.395492 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.395551 kubelet[2812]: W1105 15:50:25.395514 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.395551 kubelet[2812]: E1105 15:50:25.395527 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.395894 kubelet[2812]: E1105 15:50:25.395867 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.395894 kubelet[2812]: W1105 15:50:25.395878 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.395894 kubelet[2812]: E1105 15:50:25.395892 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.396146 kubelet[2812]: E1105 15:50:25.396125 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.396146 kubelet[2812]: W1105 15:50:25.396138 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.396146 kubelet[2812]: E1105 15:50:25.396148 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.396421 kubelet[2812]: E1105 15:50:25.396372 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.396421 kubelet[2812]: W1105 15:50:25.396382 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.396421 kubelet[2812]: E1105 15:50:25.396391 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.396714 kubelet[2812]: E1105 15:50:25.396692 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.396714 kubelet[2812]: W1105 15:50:25.396707 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.396851 kubelet[2812]: E1105 15:50:25.396720 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.397004 kubelet[2812]: E1105 15:50:25.396984 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.397004 kubelet[2812]: W1105 15:50:25.397000 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.397071 kubelet[2812]: E1105 15:50:25.397013 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.397486 kubelet[2812]: E1105 15:50:25.397408 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.397486 kubelet[2812]: W1105 15:50:25.397466 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.397589 kubelet[2812]: E1105 15:50:25.397507 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.398033 kubelet[2812]: E1105 15:50:25.398002 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.398033 kubelet[2812]: W1105 15:50:25.398016 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.398033 kubelet[2812]: E1105 15:50:25.398029 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.398303 kubelet[2812]: E1105 15:50:25.398275 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.398303 kubelet[2812]: W1105 15:50:25.398290 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.398303 kubelet[2812]: E1105 15:50:25.398301 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.398613 kubelet[2812]: E1105 15:50:25.398593 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.398613 kubelet[2812]: W1105 15:50:25.398610 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.398711 kubelet[2812]: E1105 15:50:25.398623 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.398891 kubelet[2812]: E1105 15:50:25.398874 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.398891 kubelet[2812]: W1105 15:50:25.398888 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.398962 kubelet[2812]: E1105 15:50:25.398900 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.399174 kubelet[2812]: E1105 15:50:25.399149 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.399174 kubelet[2812]: W1105 15:50:25.399163 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.399174 kubelet[2812]: E1105 15:50:25.399174 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.399516 kubelet[2812]: E1105 15:50:25.399487 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.399516 kubelet[2812]: W1105 15:50:25.399506 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.399660 kubelet[2812]: E1105 15:50:25.399518 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.399933 kubelet[2812]: E1105 15:50:25.399893 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.399933 kubelet[2812]: W1105 15:50:25.399924 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.400104 kubelet[2812]: E1105 15:50:25.399959 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.400474 kubelet[2812]: E1105 15:50:25.400431 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.400474 kubelet[2812]: W1105 15:50:25.400467 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.400561 kubelet[2812]: E1105 15:50:25.400488 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:50:25.400787 kubelet[2812]: E1105 15:50:25.400766 2812 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:50:25.400787 kubelet[2812]: W1105 15:50:25.400785 2812 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:50:25.400840 kubelet[2812]: E1105 15:50:25.400800 2812 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:50:25.968363 containerd[1622]: time="2025-11-05T15:50:25.968290319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:25.969253 containerd[1622]: time="2025-11-05T15:50:25.969215024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 5 15:50:25.970786 containerd[1622]: time="2025-11-05T15:50:25.970746129Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:25.973301 containerd[1622]: time="2025-11-05T15:50:25.973259837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:25.974194 containerd[1622]: time="2025-11-05T15:50:25.974118609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.75101765s" Nov 5 15:50:25.974284 containerd[1622]: time="2025-11-05T15:50:25.974200383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 15:50:25.981239 containerd[1622]: time="2025-11-05T15:50:25.981173361Z" level=info msg="CreateContainer within sandbox \"034fc7b01bae511ee75fe30b11628b0e3d925e10c5bda6ab84e2291b9311c5d0\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 15:50:25.992050 containerd[1622]: time="2025-11-05T15:50:25.991994594Z" level=info msg="Container 0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:26.003179 containerd[1622]: time="2025-11-05T15:50:26.003125138Z" level=info msg="CreateContainer within sandbox \"034fc7b01bae511ee75fe30b11628b0e3d925e10c5bda6ab84e2291b9311c5d0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d\"" Nov 5 15:50:26.003738 containerd[1622]: time="2025-11-05T15:50:26.003700247Z" level=info msg="StartContainer for \"0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d\"" Nov 5 15:50:26.005606 containerd[1622]: time="2025-11-05T15:50:26.005571850Z" level=info msg="connecting to shim 0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d" address="unix:///run/containerd/s/81bf8cfe54fbd7fecc807da03bc33137b08c04535d13dbc0dd5948666bfc0068" protocol=ttrpc version=3 Nov 5 15:50:26.031941 systemd[1]: Started cri-containerd-0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d.scope - libcontainer container 0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d. Nov 5 15:50:26.084510 containerd[1622]: time="2025-11-05T15:50:26.084452629Z" level=info msg="StartContainer for \"0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d\" returns successfully" Nov 5 15:50:26.094797 systemd[1]: cri-containerd-0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d.scope: Deactivated successfully. 
Nov 5 15:50:26.097860 containerd[1622]: time="2025-11-05T15:50:26.097797317Z" level=info msg="received exit event container_id:\"0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d\" id:\"0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d\" pid:3511 exited_at:{seconds:1762357826 nanos:97286979}" Nov 5 15:50:26.098144 containerd[1622]: time="2025-11-05T15:50:26.097806865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d\" id:\"0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d\" pid:3511 exited_at:{seconds:1762357826 nanos:97286979}" Nov 5 15:50:26.123974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ac4f0db27ca7a08a0421e2f86333da458fcdecbf5a618f5334b2f6927b37f1d-rootfs.mount: Deactivated successfully. Nov 5 15:50:26.337152 kubelet[2812]: I1105 15:50:26.337106 2812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:50:26.337732 kubelet[2812]: E1105 15:50:26.337547 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:26.337766 kubelet[2812]: E1105 15:50:26.337726 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:27.238902 kubelet[2812]: E1105 15:50:27.238747 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:50:27.342197 kubelet[2812]: E1105 15:50:27.342149 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:27.343716 containerd[1622]: time="2025-11-05T15:50:27.343661527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 15:50:29.237812 kubelet[2812]: E1105 15:50:29.237733 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:50:31.179508 containerd[1622]: time="2025-11-05T15:50:31.179302783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:31.180604 containerd[1622]: time="2025-11-05T15:50:31.180557767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 5 15:50:31.181970 containerd[1622]: time="2025-11-05T15:50:31.181925234Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:31.184292 containerd[1622]: time="2025-11-05T15:50:31.184252540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:31.184979 containerd[1622]: time="2025-11-05T15:50:31.184953005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.841243298s" Nov 5 
15:50:31.185059 containerd[1622]: time="2025-11-05T15:50:31.184982931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 15:50:31.190776 containerd[1622]: time="2025-11-05T15:50:31.190595704Z" level=info msg="CreateContainer within sandbox \"034fc7b01bae511ee75fe30b11628b0e3d925e10c5bda6ab84e2291b9311c5d0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 15:50:31.205365 containerd[1622]: time="2025-11-05T15:50:31.205297552Z" level=info msg="Container 7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:31.216912 containerd[1622]: time="2025-11-05T15:50:31.216848428Z" level=info msg="CreateContainer within sandbox \"034fc7b01bae511ee75fe30b11628b0e3d925e10c5bda6ab84e2291b9311c5d0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a\"" Nov 5 15:50:31.217616 containerd[1622]: time="2025-11-05T15:50:31.217413238Z" level=info msg="StartContainer for \"7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a\"" Nov 5 15:50:31.219170 containerd[1622]: time="2025-11-05T15:50:31.219115523Z" level=info msg="connecting to shim 7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a" address="unix:///run/containerd/s/81bf8cfe54fbd7fecc807da03bc33137b08c04535d13dbc0dd5948666bfc0068" protocol=ttrpc version=3 Nov 5 15:50:31.237588 kubelet[2812]: E1105 15:50:31.237534 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:50:31.245854 systemd[1]: Started 
cri-containerd-7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a.scope - libcontainer container 7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a. Nov 5 15:50:31.304371 containerd[1622]: time="2025-11-05T15:50:31.304300185Z" level=info msg="StartContainer for \"7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a\" returns successfully" Nov 5 15:50:31.355853 kubelet[2812]: E1105 15:50:31.355803 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:32.357921 kubelet[2812]: E1105 15:50:32.357875 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:32.570141 systemd[1]: cri-containerd-7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a.scope: Deactivated successfully. Nov 5 15:50:32.570500 systemd[1]: cri-containerd-7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a.scope: Consumed 775ms CPU time, 176.4M memory peak, 2.4M read from disk, 171.3M written to disk. 
Nov 5 15:50:32.596089 containerd[1622]: time="2025-11-05T15:50:32.596013310Z" level=info msg="received exit event container_id:\"7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a\" id:\"7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a\" pid:3570 exited_at:{seconds:1762357832 nanos:573307664}" Nov 5 15:50:32.600834 containerd[1622]: time="2025-11-05T15:50:32.600745179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a\" id:\"7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a\" pid:3570 exited_at:{seconds:1762357832 nanos:573307664}" Nov 5 15:50:32.627137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a6fd4578e2003153d8f7d21650dd4711e5a84f1745a39a6232deb49a54a149a-rootfs.mount: Deactivated successfully. Nov 5 15:50:32.682517 kubelet[2812]: I1105 15:50:32.682458 2812 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 15:50:33.005708 systemd[1]: Created slice kubepods-burstable-pod4921c0fe_4612_403c_bd27_6abad435a5f4.slice - libcontainer container kubepods-burstable-pod4921c0fe_4612_403c_bd27_6abad435a5f4.slice. Nov 5 15:50:33.019132 systemd[1]: Created slice kubepods-burstable-pod87fd8eaf_920d_486a_9257_d765234a7603.slice - libcontainer container kubepods-burstable-pod87fd8eaf_920d_486a_9257_d765234a7603.slice. Nov 5 15:50:33.037273 systemd[1]: Created slice kubepods-besteffort-pod9efbb451_21a8_4af2_826d_c29a518d9d96.slice - libcontainer container kubepods-besteffort-pod9efbb451_21a8_4af2_826d_c29a518d9d96.slice. Nov 5 15:50:33.050009 systemd[1]: Created slice kubepods-besteffort-pode24ec55b_ca98_450e_ad08_bd8f75c310ad.slice - libcontainer container kubepods-besteffort-pode24ec55b_ca98_450e_ad08_bd8f75c310ad.slice. 
Nov 5 15:50:33.053558 kubelet[2812]: I1105 15:50:33.053518 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e24ec55b-ca98-450e-ad08-bd8f75c310ad-calico-apiserver-certs\") pod \"calico-apiserver-698b6ffdc5-7z4nf\" (UID: \"e24ec55b-ca98-450e-ad08-bd8f75c310ad\") " pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" Nov 5 15:50:33.053558 kubelet[2812]: I1105 15:50:33.053555 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt5xn\" (UniqueName: \"kubernetes.io/projected/9c0894a9-223a-4486-8e22-60b10031690d-kube-api-access-rt5xn\") pod \"whisker-5fcf7f6dbf-mj95g\" (UID: \"9c0894a9-223a-4486-8e22-60b10031690d\") " pod="calico-system/whisker-5fcf7f6dbf-mj95g" Nov 5 15:50:33.053705 kubelet[2812]: I1105 15:50:33.053578 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtlfc\" (UniqueName: \"kubernetes.io/projected/3c6882bb-0885-494e-b1d2-fd2e09cf28b1-kube-api-access-jtlfc\") pod \"goldmane-666569f655-lh7wv\" (UID: \"3c6882bb-0885-494e-b1d2-fd2e09cf28b1\") " pod="calico-system/goldmane-666569f655-lh7wv" Nov 5 15:50:33.053705 kubelet[2812]: I1105 15:50:33.053598 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vbvb\" (UniqueName: \"kubernetes.io/projected/4921c0fe-4612-403c-bd27-6abad435a5f4-kube-api-access-8vbvb\") pod \"coredns-674b8bbfcf-whfpd\" (UID: \"4921c0fe-4612-403c-bd27-6abad435a5f4\") " pod="kube-system/coredns-674b8bbfcf-whfpd" Nov 5 15:50:33.053705 kubelet[2812]: I1105 15:50:33.053617 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5bmc\" (UniqueName: \"kubernetes.io/projected/9efbb451-21a8-4af2-826d-c29a518d9d96-kube-api-access-m5bmc\") pod 
\"calico-kube-controllers-dc69ccdf-27hbr\" (UID: \"9efbb451-21a8-4af2-826d-c29a518d9d96\") " pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" Nov 5 15:50:33.053705 kubelet[2812]: I1105 15:50:33.053660 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxw9\" (UniqueName: \"kubernetes.io/projected/87fd8eaf-920d-486a-9257-d765234a7603-kube-api-access-4fxw9\") pod \"coredns-674b8bbfcf-nqspw\" (UID: \"87fd8eaf-920d-486a-9257-d765234a7603\") " pod="kube-system/coredns-674b8bbfcf-nqspw" Nov 5 15:50:33.053705 kubelet[2812]: I1105 15:50:33.053680 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c6882bb-0885-494e-b1d2-fd2e09cf28b1-goldmane-ca-bundle\") pod \"goldmane-666569f655-lh7wv\" (UID: \"3c6882bb-0885-494e-b1d2-fd2e09cf28b1\") " pod="calico-system/goldmane-666569f655-lh7wv" Nov 5 15:50:33.053840 kubelet[2812]: I1105 15:50:33.053699 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c0894a9-223a-4486-8e22-60b10031690d-whisker-ca-bundle\") pod \"whisker-5fcf7f6dbf-mj95g\" (UID: \"9c0894a9-223a-4486-8e22-60b10031690d\") " pod="calico-system/whisker-5fcf7f6dbf-mj95g" Nov 5 15:50:33.053840 kubelet[2812]: I1105 15:50:33.053718 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jnlq\" (UniqueName: \"kubernetes.io/projected/ace081d4-c73d-4d8d-b64e-ba5786790ea2-kube-api-access-9jnlq\") pod \"calico-apiserver-698b6ffdc5-9kwmw\" (UID: \"ace081d4-c73d-4d8d-b64e-ba5786790ea2\") " pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" Nov 5 15:50:33.053840 kubelet[2812]: I1105 15:50:33.053739 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ffgc\" 
(UniqueName: \"kubernetes.io/projected/e24ec55b-ca98-450e-ad08-bd8f75c310ad-kube-api-access-7ffgc\") pod \"calico-apiserver-698b6ffdc5-7z4nf\" (UID: \"e24ec55b-ca98-450e-ad08-bd8f75c310ad\") " pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" Nov 5 15:50:33.053840 kubelet[2812]: I1105 15:50:33.053760 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ace081d4-c73d-4d8d-b64e-ba5786790ea2-calico-apiserver-certs\") pod \"calico-apiserver-698b6ffdc5-9kwmw\" (UID: \"ace081d4-c73d-4d8d-b64e-ba5786790ea2\") " pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" Nov 5 15:50:33.053840 kubelet[2812]: I1105 15:50:33.053779 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3c6882bb-0885-494e-b1d2-fd2e09cf28b1-goldmane-key-pair\") pod \"goldmane-666569f655-lh7wv\" (UID: \"3c6882bb-0885-494e-b1d2-fd2e09cf28b1\") " pod="calico-system/goldmane-666569f655-lh7wv" Nov 5 15:50:33.053962 kubelet[2812]: I1105 15:50:33.053797 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9efbb451-21a8-4af2-826d-c29a518d9d96-tigera-ca-bundle\") pod \"calico-kube-controllers-dc69ccdf-27hbr\" (UID: \"9efbb451-21a8-4af2-826d-c29a518d9d96\") " pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" Nov 5 15:50:33.053962 kubelet[2812]: I1105 15:50:33.053816 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fd8eaf-920d-486a-9257-d765234a7603-config-volume\") pod \"coredns-674b8bbfcf-nqspw\" (UID: \"87fd8eaf-920d-486a-9257-d765234a7603\") " pod="kube-system/coredns-674b8bbfcf-nqspw" Nov 5 15:50:33.053962 kubelet[2812]: I1105 15:50:33.053833 2812 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4921c0fe-4612-403c-bd27-6abad435a5f4-config-volume\") pod \"coredns-674b8bbfcf-whfpd\" (UID: \"4921c0fe-4612-403c-bd27-6abad435a5f4\") " pod="kube-system/coredns-674b8bbfcf-whfpd" Nov 5 15:50:33.053962 kubelet[2812]: I1105 15:50:33.053851 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c0894a9-223a-4486-8e22-60b10031690d-whisker-backend-key-pair\") pod \"whisker-5fcf7f6dbf-mj95g\" (UID: \"9c0894a9-223a-4486-8e22-60b10031690d\") " pod="calico-system/whisker-5fcf7f6dbf-mj95g" Nov 5 15:50:33.053962 kubelet[2812]: I1105 15:50:33.053874 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6882bb-0885-494e-b1d2-fd2e09cf28b1-config\") pod \"goldmane-666569f655-lh7wv\" (UID: \"3c6882bb-0885-494e-b1d2-fd2e09cf28b1\") " pod="calico-system/goldmane-666569f655-lh7wv" Nov 5 15:50:33.057461 systemd[1]: Created slice kubepods-besteffort-pod3c6882bb_0885_494e_b1d2_fd2e09cf28b1.slice - libcontainer container kubepods-besteffort-pod3c6882bb_0885_494e_b1d2_fd2e09cf28b1.slice. Nov 5 15:50:33.061363 systemd[1]: Created slice kubepods-besteffort-podace081d4_c73d_4d8d_b64e_ba5786790ea2.slice - libcontainer container kubepods-besteffort-podace081d4_c73d_4d8d_b64e_ba5786790ea2.slice. Nov 5 15:50:33.065894 systemd[1]: Created slice kubepods-besteffort-pod9c0894a9_223a_4486_8e22_60b10031690d.slice - libcontainer container kubepods-besteffort-pod9c0894a9_223a_4486_8e22_60b10031690d.slice. Nov 5 15:50:33.252024 systemd[1]: Created slice kubepods-besteffort-pod908be0d9_6b2b_4915_9d34_62f14a2dce18.slice - libcontainer container kubepods-besteffort-pod908be0d9_6b2b_4915_9d34_62f14a2dce18.slice. 
Nov 5 15:50:33.254406 containerd[1622]: time="2025-11-05T15:50:33.254367606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5fldj,Uid:908be0d9-6b2b-4915-9d34-62f14a2dce18,Namespace:calico-system,Attempt:0,}" Nov 5 15:50:33.314985 kubelet[2812]: E1105 15:50:33.314072 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:33.315565 containerd[1622]: time="2025-11-05T15:50:33.315535883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-whfpd,Uid:4921c0fe-4612-403c-bd27-6abad435a5f4,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:33.331892 kubelet[2812]: E1105 15:50:33.331836 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:33.333995 containerd[1622]: time="2025-11-05T15:50:33.333937952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nqspw,Uid:87fd8eaf-920d-486a-9257-d765234a7603,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:33.344717 containerd[1622]: time="2025-11-05T15:50:33.344670681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc69ccdf-27hbr,Uid:9efbb451-21a8-4af2-826d-c29a518d9d96,Namespace:calico-system,Attempt:0,}" Nov 5 15:50:33.370980 kubelet[2812]: E1105 15:50:33.370809 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:33.409653 containerd[1622]: time="2025-11-05T15:50:33.355142300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6ffdc5-7z4nf,Uid:e24ec55b-ca98-450e-ad08-bd8f75c310ad,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:50:33.409831 containerd[1622]: 
time="2025-11-05T15:50:33.360111575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lh7wv,Uid:3c6882bb-0885-494e-b1d2-fd2e09cf28b1,Namespace:calico-system,Attempt:0,}" Nov 5 15:50:33.409877 containerd[1622]: time="2025-11-05T15:50:33.369481355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcf7f6dbf-mj95g,Uid:9c0894a9-223a-4486-8e22-60b10031690d,Namespace:calico-system,Attempt:0,}" Nov 5 15:50:33.409924 containerd[1622]: time="2025-11-05T15:50:33.370253054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6ffdc5-9kwmw,Uid:ace081d4-c73d-4d8d-b64e-ba5786790ea2,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:50:33.409962 containerd[1622]: time="2025-11-05T15:50:33.371697575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 15:50:33.493370 containerd[1622]: time="2025-11-05T15:50:33.493183451Z" level=error msg="Failed to destroy network for sandbox \"20431173b4fff41f5326a51cc2041eb49dd1d3a218c45feafd643dfb911a5f3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.538779 containerd[1622]: time="2025-11-05T15:50:33.538672717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5fldj,Uid:908be0d9-6b2b-4915-9d34-62f14a2dce18,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20431173b4fff41f5326a51cc2041eb49dd1d3a218c45feafd643dfb911a5f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.543294 kubelet[2812]: E1105 15:50:33.543114 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"20431173b4fff41f5326a51cc2041eb49dd1d3a218c45feafd643dfb911a5f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.543891 kubelet[2812]: E1105 15:50:33.543731 2812 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20431173b4fff41f5326a51cc2041eb49dd1d3a218c45feafd643dfb911a5f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5fldj" Nov 5 15:50:33.544053 kubelet[2812]: E1105 15:50:33.544021 2812 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20431173b4fff41f5326a51cc2041eb49dd1d3a218c45feafd643dfb911a5f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5fldj" Nov 5 15:50:33.544950 kubelet[2812]: E1105 15:50:33.544886 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5fldj_calico-system(908be0d9-6b2b-4915-9d34-62f14a2dce18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5fldj_calico-system(908be0d9-6b2b-4915-9d34-62f14a2dce18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20431173b4fff41f5326a51cc2041eb49dd1d3a218c45feafd643dfb911a5f3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5fldj" 
podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:50:33.565173 containerd[1622]: time="2025-11-05T15:50:33.564943191Z" level=error msg="Failed to destroy network for sandbox \"5b3864a6c6191e16ea9a03b47bf3f71c0dcf33fcb334f5029a02b794126d8e0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.571444 containerd[1622]: time="2025-11-05T15:50:33.571249364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nqspw,Uid:87fd8eaf-920d-486a-9257-d765234a7603,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3864a6c6191e16ea9a03b47bf3f71c0dcf33fcb334f5029a02b794126d8e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.571704 kubelet[2812]: E1105 15:50:33.571613 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3864a6c6191e16ea9a03b47bf3f71c0dcf33fcb334f5029a02b794126d8e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.571777 kubelet[2812]: E1105 15:50:33.571747 2812 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3864a6c6191e16ea9a03b47bf3f71c0dcf33fcb334f5029a02b794126d8e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nqspw" Nov 5 15:50:33.571878 kubelet[2812]: E1105 
15:50:33.571846 2812 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3864a6c6191e16ea9a03b47bf3f71c0dcf33fcb334f5029a02b794126d8e0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nqspw" Nov 5 15:50:33.572095 kubelet[2812]: E1105 15:50:33.572037 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nqspw_kube-system(87fd8eaf-920d-486a-9257-d765234a7603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nqspw_kube-system(87fd8eaf-920d-486a-9257-d765234a7603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b3864a6c6191e16ea9a03b47bf3f71c0dcf33fcb334f5029a02b794126d8e0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nqspw" podUID="87fd8eaf-920d-486a-9257-d765234a7603" Nov 5 15:50:33.587095 containerd[1622]: time="2025-11-05T15:50:33.586860717Z" level=error msg="Failed to destroy network for sandbox \"2442535f758f840d9fea88406d7f3c9e03759926583ed1cf36e51fea8d553924\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.588571 containerd[1622]: time="2025-11-05T15:50:33.588525711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-whfpd,Uid:4921c0fe-4612-403c-bd27-6abad435a5f4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2442535f758f840d9fea88406d7f3c9e03759926583ed1cf36e51fea8d553924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.589120 kubelet[2812]: E1105 15:50:33.589055 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2442535f758f840d9fea88406d7f3c9e03759926583ed1cf36e51fea8d553924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.589602 kubelet[2812]: E1105 15:50:33.589477 2812 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2442535f758f840d9fea88406d7f3c9e03759926583ed1cf36e51fea8d553924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-whfpd" Nov 5 15:50:33.589602 kubelet[2812]: E1105 15:50:33.589546 2812 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2442535f758f840d9fea88406d7f3c9e03759926583ed1cf36e51fea8d553924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-whfpd" Nov 5 15:50:33.589925 kubelet[2812]: E1105 15:50:33.589865 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-whfpd_kube-system(4921c0fe-4612-403c-bd27-6abad435a5f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-whfpd_kube-system(4921c0fe-4612-403c-bd27-6abad435a5f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2442535f758f840d9fea88406d7f3c9e03759926583ed1cf36e51fea8d553924\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-whfpd" podUID="4921c0fe-4612-403c-bd27-6abad435a5f4" Nov 5 15:50:33.667031 containerd[1622]: time="2025-11-05T15:50:33.666917766Z" level=error msg="Failed to destroy network for sandbox \"4c8acd3b7c2479254fd688c1136d371254ad129c92df464ae2d3a4adbd6e926b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.671831 systemd[1]: run-netns-cni\x2d3d001304\x2d8f31\x2d888f\x2ddffa\x2d4018a6de4def.mount: Deactivated successfully. 
Nov 5 15:50:33.676134 kubelet[2812]: E1105 15:50:33.672202 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8acd3b7c2479254fd688c1136d371254ad129c92df464ae2d3a4adbd6e926b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.676134 kubelet[2812]: E1105 15:50:33.672384 2812 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8acd3b7c2479254fd688c1136d371254ad129c92df464ae2d3a4adbd6e926b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" Nov 5 15:50:33.676134 kubelet[2812]: E1105 15:50:33.672441 2812 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8acd3b7c2479254fd688c1136d371254ad129c92df464ae2d3a4adbd6e926b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" Nov 5 15:50:33.676323 containerd[1622]: time="2025-11-05T15:50:33.671821197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6ffdc5-7z4nf,Uid:e24ec55b-ca98-450e-ad08-bd8f75c310ad,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8acd3b7c2479254fd688c1136d371254ad129c92df464ae2d3a4adbd6e926b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 5 15:50:33.676323 containerd[1622]: time="2025-11-05T15:50:33.672097786Z" level=error msg="Failed to destroy network for sandbox \"2ed94efec02b0b5c7b0fdcb6572a3d5573dc9aef77a8e1c81618e1bebca679fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.676323 containerd[1622]: time="2025-11-05T15:50:33.675718730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6ffdc5-9kwmw,Uid:ace081d4-c73d-4d8d-b64e-ba5786790ea2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ed94efec02b0b5c7b0fdcb6572a3d5573dc9aef77a8e1c81618e1bebca679fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.675230 systemd[1]: run-netns-cni\x2d8d081bd2\x2dda7a\x2d694c\x2d981f\x2d03b7aae4abb0.mount: Deactivated successfully. 
Nov 5 15:50:33.676533 kubelet[2812]: E1105 15:50:33.672731 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-698b6ffdc5-7z4nf_calico-apiserver(e24ec55b-ca98-450e-ad08-bd8f75c310ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-698b6ffdc5-7z4nf_calico-apiserver(e24ec55b-ca98-450e-ad08-bd8f75c310ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c8acd3b7c2479254fd688c1136d371254ad129c92df464ae2d3a4adbd6e926b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" podUID="e24ec55b-ca98-450e-ad08-bd8f75c310ad" Nov 5 15:50:33.676533 kubelet[2812]: E1105 15:50:33.676006 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ed94efec02b0b5c7b0fdcb6572a3d5573dc9aef77a8e1c81618e1bebca679fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.676533 kubelet[2812]: E1105 15:50:33.676089 2812 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ed94efec02b0b5c7b0fdcb6572a3d5573dc9aef77a8e1c81618e1bebca679fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" Nov 5 15:50:33.678749 kubelet[2812]: E1105 15:50:33.676120 2812 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2ed94efec02b0b5c7b0fdcb6572a3d5573dc9aef77a8e1c81618e1bebca679fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" Nov 5 15:50:33.678749 kubelet[2812]: E1105 15:50:33.676181 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-698b6ffdc5-9kwmw_calico-apiserver(ace081d4-c73d-4d8d-b64e-ba5786790ea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-698b6ffdc5-9kwmw_calico-apiserver(ace081d4-c73d-4d8d-b64e-ba5786790ea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ed94efec02b0b5c7b0fdcb6572a3d5573dc9aef77a8e1c81618e1bebca679fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" podUID="ace081d4-c73d-4d8d-b64e-ba5786790ea2" Nov 5 15:50:33.686747 containerd[1622]: time="2025-11-05T15:50:33.686656775Z" level=error msg="Failed to destroy network for sandbox \"7c01e17ecde2aa482c3ea8c596f734371c8040fcc1bd0c3b8c368ea9792468b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.690593 systemd[1]: run-netns-cni\x2d8946c148\x2d718c\x2d500b\x2d40fd\x2da9a299511e7a.mount: Deactivated successfully. 
Nov 5 15:50:33.693894 containerd[1622]: time="2025-11-05T15:50:33.693758079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc69ccdf-27hbr,Uid:9efbb451-21a8-4af2-826d-c29a518d9d96,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c01e17ecde2aa482c3ea8c596f734371c8040fcc1bd0c3b8c368ea9792468b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.694656 kubelet[2812]: E1105 15:50:33.694405 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c01e17ecde2aa482c3ea8c596f734371c8040fcc1bd0c3b8c368ea9792468b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.694656 kubelet[2812]: E1105 15:50:33.694499 2812 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c01e17ecde2aa482c3ea8c596f734371c8040fcc1bd0c3b8c368ea9792468b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" Nov 5 15:50:33.694656 kubelet[2812]: E1105 15:50:33.694520 2812 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c01e17ecde2aa482c3ea8c596f734371c8040fcc1bd0c3b8c368ea9792468b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" Nov 5 15:50:33.694791 kubelet[2812]: E1105 15:50:33.694572 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dc69ccdf-27hbr_calico-system(9efbb451-21a8-4af2-826d-c29a518d9d96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dc69ccdf-27hbr_calico-system(9efbb451-21a8-4af2-826d-c29a518d9d96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c01e17ecde2aa482c3ea8c596f734371c8040fcc1bd0c3b8c368ea9792468b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" podUID="9efbb451-21a8-4af2-826d-c29a518d9d96" Nov 5 15:50:33.699795 containerd[1622]: time="2025-11-05T15:50:33.699714656Z" level=error msg="Failed to destroy network for sandbox \"c19f0329aa2ffa6d64d70c7a6b8aba676a30a686d48e896ae82286ff3882de94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.703161 systemd[1]: run-netns-cni\x2d22019029\x2d9eba\x2de5c7\x2dd12e\x2d27fdaf4c7943.mount: Deactivated successfully. 
Nov 5 15:50:33.704744 containerd[1622]: time="2025-11-05T15:50:33.704586157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcf7f6dbf-mj95g,Uid:9c0894a9-223a-4486-8e22-60b10031690d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19f0329aa2ffa6d64d70c7a6b8aba676a30a686d48e896ae82286ff3882de94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.704971 kubelet[2812]: E1105 15:50:33.704925 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19f0329aa2ffa6d64d70c7a6b8aba676a30a686d48e896ae82286ff3882de94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.705029 kubelet[2812]: E1105 15:50:33.705000 2812 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19f0329aa2ffa6d64d70c7a6b8aba676a30a686d48e896ae82286ff3882de94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fcf7f6dbf-mj95g" Nov 5 15:50:33.705068 kubelet[2812]: E1105 15:50:33.705027 2812 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19f0329aa2ffa6d64d70c7a6b8aba676a30a686d48e896ae82286ff3882de94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-5fcf7f6dbf-mj95g" Nov 5 15:50:33.705118 kubelet[2812]: E1105 15:50:33.705089 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5fcf7f6dbf-mj95g_calico-system(9c0894a9-223a-4486-8e22-60b10031690d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5fcf7f6dbf-mj95g_calico-system(9c0894a9-223a-4486-8e22-60b10031690d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c19f0329aa2ffa6d64d70c7a6b8aba676a30a686d48e896ae82286ff3882de94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fcf7f6dbf-mj95g" podUID="9c0894a9-223a-4486-8e22-60b10031690d" Nov 5 15:50:33.707923 containerd[1622]: time="2025-11-05T15:50:33.707879337Z" level=error msg="Failed to destroy network for sandbox \"fed5350dc34f5c87005602496b9a459a55f4e34bb1363ae168406162966fb707\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.709794 containerd[1622]: time="2025-11-05T15:50:33.709740279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lh7wv,Uid:3c6882bb-0885-494e-b1d2-fd2e09cf28b1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed5350dc34f5c87005602496b9a459a55f4e34bb1363ae168406162966fb707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.710069 kubelet[2812]: E1105 15:50:33.710023 2812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"fed5350dc34f5c87005602496b9a459a55f4e34bb1363ae168406162966fb707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:50:33.710148 kubelet[2812]: E1105 15:50:33.710094 2812 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed5350dc34f5c87005602496b9a459a55f4e34bb1363ae168406162966fb707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lh7wv" Nov 5 15:50:33.710148 kubelet[2812]: E1105 15:50:33.710122 2812 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed5350dc34f5c87005602496b9a459a55f4e34bb1363ae168406162966fb707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lh7wv" Nov 5 15:50:33.710244 kubelet[2812]: E1105 15:50:33.710186 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-lh7wv_calico-system(3c6882bb-0885-494e-b1d2-fd2e09cf28b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-lh7wv_calico-system(3c6882bb-0885-494e-b1d2-fd2e09cf28b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fed5350dc34f5c87005602496b9a459a55f4e34bb1363ae168406162966fb707\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-666569f655-lh7wv" podUID="3c6882bb-0885-494e-b1d2-fd2e09cf28b1" Nov 5 15:50:34.626890 systemd[1]: run-netns-cni\x2d611ef9d6\x2dfe79\x2dad44\x2dee3e\x2d5b7f57ef7995.mount: Deactivated successfully. Nov 5 15:50:40.368170 kubelet[2812]: I1105 15:50:40.368103 2812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:50:40.369383 kubelet[2812]: E1105 15:50:40.369345 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:40.387291 kubelet[2812]: E1105 15:50:40.387248 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:42.616302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1776517608.mount: Deactivated successfully. Nov 5 15:50:43.821406 containerd[1622]: time="2025-11-05T15:50:43.821331694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:43.823730 containerd[1622]: time="2025-11-05T15:50:43.823650097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 15:50:43.828194 containerd[1622]: time="2025-11-05T15:50:43.828137965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.418181712s" Nov 5 15:50:43.828194 containerd[1622]: time="2025-11-05T15:50:43.828173474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference 
\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 15:50:43.830652 containerd[1622]: time="2025-11-05T15:50:43.830586610Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:43.831266 containerd[1622]: time="2025-11-05T15:50:43.831199546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:43.853276 containerd[1622]: time="2025-11-05T15:50:43.853196153Z" level=info msg="CreateContainer within sandbox \"034fc7b01bae511ee75fe30b11628b0e3d925e10c5bda6ab84e2291b9311c5d0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 15:50:43.866742 containerd[1622]: time="2025-11-05T15:50:43.866678905Z" level=info msg="Container c2487af145ee0191bd4e69f85bc46ecb4b1fb66275b49fc5ea3643b170e8bd3b: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:43.880658 containerd[1622]: time="2025-11-05T15:50:43.880568164Z" level=info msg="CreateContainer within sandbox \"034fc7b01bae511ee75fe30b11628b0e3d925e10c5bda6ab84e2291b9311c5d0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c2487af145ee0191bd4e69f85bc46ecb4b1fb66275b49fc5ea3643b170e8bd3b\"" Nov 5 15:50:43.881301 containerd[1622]: time="2025-11-05T15:50:43.881219724Z" level=info msg="StartContainer for \"c2487af145ee0191bd4e69f85bc46ecb4b1fb66275b49fc5ea3643b170e8bd3b\"" Nov 5 15:50:43.883185 containerd[1622]: time="2025-11-05T15:50:43.883150888Z" level=info msg="connecting to shim c2487af145ee0191bd4e69f85bc46ecb4b1fb66275b49fc5ea3643b170e8bd3b" address="unix:///run/containerd/s/81bf8cfe54fbd7fecc807da03bc33137b08c04535d13dbc0dd5948666bfc0068" protocol=ttrpc version=3 Nov 5 15:50:43.965823 systemd[1]: Started 
cri-containerd-c2487af145ee0191bd4e69f85bc46ecb4b1fb66275b49fc5ea3643b170e8bd3b.scope - libcontainer container c2487af145ee0191bd4e69f85bc46ecb4b1fb66275b49fc5ea3643b170e8bd3b. Nov 5 15:50:44.087571 containerd[1622]: time="2025-11-05T15:50:44.087348722Z" level=info msg="StartContainer for \"c2487af145ee0191bd4e69f85bc46ecb4b1fb66275b49fc5ea3643b170e8bd3b\" returns successfully" Nov 5 15:50:44.175521 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 15:50:44.176616 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 5 15:50:44.408788 kubelet[2812]: E1105 15:50:44.407758 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:44.542348 kubelet[2812]: I1105 15:50:44.542298 2812 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt5xn\" (UniqueName: \"kubernetes.io/projected/9c0894a9-223a-4486-8e22-60b10031690d-kube-api-access-rt5xn\") pod \"9c0894a9-223a-4486-8e22-60b10031690d\" (UID: \"9c0894a9-223a-4486-8e22-60b10031690d\") " Nov 5 15:50:44.542348 kubelet[2812]: I1105 15:50:44.542358 2812 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c0894a9-223a-4486-8e22-60b10031690d-whisker-backend-key-pair\") pod \"9c0894a9-223a-4486-8e22-60b10031690d\" (UID: \"9c0894a9-223a-4486-8e22-60b10031690d\") " Nov 5 15:50:44.542552 kubelet[2812]: I1105 15:50:44.542383 2812 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c0894a9-223a-4486-8e22-60b10031690d-whisker-ca-bundle\") pod \"9c0894a9-223a-4486-8e22-60b10031690d\" (UID: \"9c0894a9-223a-4486-8e22-60b10031690d\") " Nov 5 15:50:44.546077 kubelet[2812]: I1105 15:50:44.545940 2812 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0894a9-223a-4486-8e22-60b10031690d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9c0894a9-223a-4486-8e22-60b10031690d" (UID: "9c0894a9-223a-4486-8e22-60b10031690d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:50:44.550186 kubelet[2812]: I1105 15:50:44.550132 2812 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0894a9-223a-4486-8e22-60b10031690d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9c0894a9-223a-4486-8e22-60b10031690d" (UID: "9c0894a9-223a-4486-8e22-60b10031690d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:50:44.550410 kubelet[2812]: I1105 15:50:44.550388 2812 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0894a9-223a-4486-8e22-60b10031690d-kube-api-access-rt5xn" (OuterVolumeSpecName: "kube-api-access-rt5xn") pod "9c0894a9-223a-4486-8e22-60b10031690d" (UID: "9c0894a9-223a-4486-8e22-60b10031690d"). InnerVolumeSpecName "kube-api-access-rt5xn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:50:44.590176 containerd[1622]: time="2025-11-05T15:50:44.590118803Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2487af145ee0191bd4e69f85bc46ecb4b1fb66275b49fc5ea3643b170e8bd3b\" id:\"92ecb2d756156abb0808e83a798de77b146cd6507315b1f1913a8f6abe8b6df2\" pid:3952 exit_status:1 exited_at:{seconds:1762357844 nanos:589525807}" Nov 5 15:50:44.643199 kubelet[2812]: I1105 15:50:44.643129 2812 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c0894a9-223a-4486-8e22-60b10031690d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 15:50:44.643199 kubelet[2812]: I1105 15:50:44.643174 2812 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rt5xn\" (UniqueName: \"kubernetes.io/projected/9c0894a9-223a-4486-8e22-60b10031690d-kube-api-access-rt5xn\") on node \"localhost\" DevicePath \"\"" Nov 5 15:50:44.643199 kubelet[2812]: I1105 15:50:44.643185 2812 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c0894a9-223a-4486-8e22-60b10031690d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 15:50:44.718839 systemd[1]: Removed slice kubepods-besteffort-pod9c0894a9_223a_4486_8e22_60b10031690d.slice - libcontainer container kubepods-besteffort-pod9c0894a9_223a_4486_8e22_60b10031690d.slice. 
Nov 5 15:50:44.733470 kubelet[2812]: I1105 15:50:44.733366 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gft9w" podStartSLOduration=1.6821361019999999 podStartE2EDuration="23.733346854s" podCreationTimestamp="2025-11-05 15:50:21 +0000 UTC" firstStartedPulling="2025-11-05 15:50:21.779729261 +0000 UTC m=+18.644404607" lastFinishedPulling="2025-11-05 15:50:43.830940004 +0000 UTC m=+40.695615359" observedRunningTime="2025-11-05 15:50:44.447221109 +0000 UTC m=+41.311896444" watchObservedRunningTime="2025-11-05 15:50:44.733346854 +0000 UTC m=+41.598022199" Nov 5 15:50:44.787458 systemd[1]: Created slice kubepods-besteffort-pod6f1cb5a0_7db4_483a_9554_eea8e26ca91e.slice - libcontainer container kubepods-besteffort-pod6f1cb5a0_7db4_483a_9554_eea8e26ca91e.slice. Nov 5 15:50:44.837506 systemd[1]: var-lib-kubelet-pods-9c0894a9\x2d223a\x2d4486\x2d8e22\x2d60b10031690d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drt5xn.mount: Deactivated successfully. Nov 5 15:50:44.837704 systemd[1]: var-lib-kubelet-pods-9c0894a9\x2d223a\x2d4486\x2d8e22\x2d60b10031690d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 5 15:50:44.844415 kubelet[2812]: I1105 15:50:44.844369 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f1cb5a0-7db4-483a-9554-eea8e26ca91e-whisker-backend-key-pair\") pod \"whisker-86c567bcd9-r5jn5\" (UID: \"6f1cb5a0-7db4-483a-9554-eea8e26ca91e\") " pod="calico-system/whisker-86c567bcd9-r5jn5" Nov 5 15:50:44.844415 kubelet[2812]: I1105 15:50:44.844414 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmkwk\" (UniqueName: \"kubernetes.io/projected/6f1cb5a0-7db4-483a-9554-eea8e26ca91e-kube-api-access-vmkwk\") pod \"whisker-86c567bcd9-r5jn5\" (UID: \"6f1cb5a0-7db4-483a-9554-eea8e26ca91e\") " pod="calico-system/whisker-86c567bcd9-r5jn5" Nov 5 15:50:44.844741 kubelet[2812]: I1105 15:50:44.844486 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f1cb5a0-7db4-483a-9554-eea8e26ca91e-whisker-ca-bundle\") pod \"whisker-86c567bcd9-r5jn5\" (UID: \"6f1cb5a0-7db4-483a-9554-eea8e26ca91e\") " pod="calico-system/whisker-86c567bcd9-r5jn5" Nov 5 15:50:45.092049 containerd[1622]: time="2025-11-05T15:50:45.091921362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c567bcd9-r5jn5,Uid:6f1cb5a0-7db4-483a-9554-eea8e26ca91e,Namespace:calico-system,Attempt:0,}" Nov 5 15:50:45.238370 kubelet[2812]: E1105 15:50:45.238152 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:45.240393 containerd[1622]: time="2025-11-05T15:50:45.239844206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nqspw,Uid:87fd8eaf-920d-486a-9257-d765234a7603,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:45.243076 kubelet[2812]: I1105 
15:50:45.243038 2812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0894a9-223a-4486-8e22-60b10031690d" path="/var/lib/kubelet/pods/9c0894a9-223a-4486-8e22-60b10031690d/volumes" Nov 5 15:50:45.270241 systemd-networkd[1509]: calidc7bf1be8c2: Link UP Nov 5 15:50:45.271353 systemd-networkd[1509]: calidc7bf1be8c2: Gained carrier Nov 5 15:50:45.292210 containerd[1622]: 2025-11-05 15:50:45.121 [INFO][3980] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:50:45.292210 containerd[1622]: 2025-11-05 15:50:45.144 [INFO][3980] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--86c567bcd9--r5jn5-eth0 whisker-86c567bcd9- calico-system 6f1cb5a0-7db4-483a-9554-eea8e26ca91e 931 0 2025-11-05 15:50:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86c567bcd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-86c567bcd9-r5jn5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidc7bf1be8c2 [] [] }} ContainerID="1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" Namespace="calico-system" Pod="whisker-86c567bcd9-r5jn5" WorkloadEndpoint="localhost-k8s-whisker--86c567bcd9--r5jn5-" Nov 5 15:50:45.292210 containerd[1622]: 2025-11-05 15:50:45.144 [INFO][3980] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" Namespace="calico-system" Pod="whisker-86c567bcd9-r5jn5" WorkloadEndpoint="localhost-k8s-whisker--86c567bcd9--r5jn5-eth0" Nov 5 15:50:45.292210 containerd[1622]: 2025-11-05 15:50:45.217 [INFO][3994] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" 
HandleID="k8s-pod-network.1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" Workload="localhost-k8s-whisker--86c567bcd9--r5jn5-eth0" Nov 5 15:50:45.292461 containerd[1622]: 2025-11-05 15:50:45.218 [INFO][3994] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" HandleID="k8s-pod-network.1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" Workload="localhost-k8s-whisker--86c567bcd9--r5jn5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bfb40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-86c567bcd9-r5jn5", "timestamp":"2025-11-05 15:50:45.217228934 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:50:45.292461 containerd[1622]: 2025-11-05 15:50:45.218 [INFO][3994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:50:45.292461 containerd[1622]: 2025-11-05 15:50:45.218 [INFO][3994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:50:45.292461 containerd[1622]: 2025-11-05 15:50:45.218 [INFO][3994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:50:45.292461 containerd[1622]: 2025-11-05 15:50:45.227 [INFO][3994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" host="localhost" Nov 5 15:50:45.292461 containerd[1622]: 2025-11-05 15:50:45.233 [INFO][3994] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:50:45.292461 containerd[1622]: 2025-11-05 15:50:45.238 [INFO][3994] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:50:45.292461 containerd[1622]: 2025-11-05 15:50:45.240 [INFO][3994] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:45.292461 containerd[1622]: 2025-11-05 15:50:45.243 [INFO][3994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:45.292461 containerd[1622]: 2025-11-05 15:50:45.243 [INFO][3994] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" host="localhost" Nov 5 15:50:45.292739 containerd[1622]: 2025-11-05 15:50:45.247 [INFO][3994] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc Nov 5 15:50:45.292739 containerd[1622]: 2025-11-05 15:50:45.253 [INFO][3994] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" host="localhost" Nov 5 15:50:45.292739 containerd[1622]: 2025-11-05 15:50:45.258 [INFO][3994] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" host="localhost" Nov 5 15:50:45.292739 containerd[1622]: 2025-11-05 15:50:45.258 [INFO][3994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" host="localhost" Nov 5 15:50:45.292739 containerd[1622]: 2025-11-05 15:50:45.258 [INFO][3994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:50:45.292739 containerd[1622]: 2025-11-05 15:50:45.258 [INFO][3994] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" HandleID="k8s-pod-network.1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" Workload="localhost-k8s-whisker--86c567bcd9--r5jn5-eth0" Nov 5 15:50:45.292860 containerd[1622]: 2025-11-05 15:50:45.261 [INFO][3980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" Namespace="calico-system" Pod="whisker-86c567bcd9-r5jn5" WorkloadEndpoint="localhost-k8s-whisker--86c567bcd9--r5jn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86c567bcd9--r5jn5-eth0", GenerateName:"whisker-86c567bcd9-", Namespace:"calico-system", SelfLink:"", UID:"6f1cb5a0-7db4-483a-9554-eea8e26ca91e", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86c567bcd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-86c567bcd9-r5jn5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidc7bf1be8c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:45.292860 containerd[1622]: 2025-11-05 15:50:45.261 [INFO][3980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" Namespace="calico-system" Pod="whisker-86c567bcd9-r5jn5" WorkloadEndpoint="localhost-k8s-whisker--86c567bcd9--r5jn5-eth0" Nov 5 15:50:45.292935 containerd[1622]: 2025-11-05 15:50:45.261 [INFO][3980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc7bf1be8c2 ContainerID="1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" Namespace="calico-system" Pod="whisker-86c567bcd9-r5jn5" WorkloadEndpoint="localhost-k8s-whisker--86c567bcd9--r5jn5-eth0" Nov 5 15:50:45.292935 containerd[1622]: 2025-11-05 15:50:45.272 [INFO][3980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" Namespace="calico-system" Pod="whisker-86c567bcd9-r5jn5" WorkloadEndpoint="localhost-k8s-whisker--86c567bcd9--r5jn5-eth0" Nov 5 15:50:45.292984 containerd[1622]: 2025-11-05 15:50:45.272 [INFO][3980] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" Namespace="calico-system" Pod="whisker-86c567bcd9-r5jn5" 
WorkloadEndpoint="localhost-k8s-whisker--86c567bcd9--r5jn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86c567bcd9--r5jn5-eth0", GenerateName:"whisker-86c567bcd9-", Namespace:"calico-system", SelfLink:"", UID:"6f1cb5a0-7db4-483a-9554-eea8e26ca91e", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86c567bcd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc", Pod:"whisker-86c567bcd9-r5jn5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidc7bf1be8c2", MAC:"d6:ec:15:07:cc:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:45.293037 containerd[1622]: 2025-11-05 15:50:45.288 [INFO][3980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" Namespace="calico-system" Pod="whisker-86c567bcd9-r5jn5" WorkloadEndpoint="localhost-k8s-whisker--86c567bcd9--r5jn5-eth0" Nov 5 15:50:45.370725 systemd-networkd[1509]: cali80ec169001d: Link UP Nov 5 15:50:45.372055 systemd-networkd[1509]: 
cali80ec169001d: Gained carrier Nov 5 15:50:45.388239 containerd[1622]: 2025-11-05 15:50:45.277 [INFO][4002] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:50:45.388239 containerd[1622]: 2025-11-05 15:50:45.292 [INFO][4002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--nqspw-eth0 coredns-674b8bbfcf- kube-system 87fd8eaf-920d-486a-9257-d765234a7603 842 0 2025-11-05 15:50:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-nqspw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali80ec169001d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqspw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqspw-" Nov 5 15:50:45.388239 containerd[1622]: 2025-11-05 15:50:45.292 [INFO][4002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqspw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqspw-eth0" Nov 5 15:50:45.388239 containerd[1622]: 2025-11-05 15:50:45.326 [INFO][4023] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" HandleID="k8s-pod-network.85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" Workload="localhost-k8s-coredns--674b8bbfcf--nqspw-eth0" Nov 5 15:50:45.388605 containerd[1622]: 2025-11-05 15:50:45.326 [INFO][4023] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" 
HandleID="k8s-pod-network.85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" Workload="localhost-k8s-coredns--674b8bbfcf--nqspw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7020), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-nqspw", "timestamp":"2025-11-05 15:50:45.326740641 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:50:45.388605 containerd[1622]: 2025-11-05 15:50:45.326 [INFO][4023] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:50:45.388605 containerd[1622]: 2025-11-05 15:50:45.327 [INFO][4023] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:50:45.388605 containerd[1622]: 2025-11-05 15:50:45.327 [INFO][4023] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:50:45.388605 containerd[1622]: 2025-11-05 15:50:45.335 [INFO][4023] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" host="localhost" Nov 5 15:50:45.388605 containerd[1622]: 2025-11-05 15:50:45.342 [INFO][4023] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:50:45.388605 containerd[1622]: 2025-11-05 15:50:45.347 [INFO][4023] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:50:45.388605 containerd[1622]: 2025-11-05 15:50:45.350 [INFO][4023] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:45.388605 containerd[1622]: 2025-11-05 15:50:45.352 [INFO][4023] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:45.388605 containerd[1622]: 2025-11-05 15:50:45.352 [INFO][4023] 
ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" host="localhost" Nov 5 15:50:45.388947 containerd[1622]: 2025-11-05 15:50:45.354 [INFO][4023] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee Nov 5 15:50:45.388947 containerd[1622]: 2025-11-05 15:50:45.359 [INFO][4023] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" host="localhost" Nov 5 15:50:45.388947 containerd[1622]: 2025-11-05 15:50:45.364 [INFO][4023] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" host="localhost" Nov 5 15:50:45.388947 containerd[1622]: 2025-11-05 15:50:45.364 [INFO][4023] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" host="localhost" Nov 5 15:50:45.388947 containerd[1622]: 2025-11-05 15:50:45.364 [INFO][4023] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:50:45.388947 containerd[1622]: 2025-11-05 15:50:45.364 [INFO][4023] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" HandleID="k8s-pod-network.85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" Workload="localhost-k8s-coredns--674b8bbfcf--nqspw-eth0" Nov 5 15:50:45.389119 containerd[1622]: 2025-11-05 15:50:45.368 [INFO][4002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqspw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqspw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nqspw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"87fd8eaf-920d-486a-9257-d765234a7603", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-nqspw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80ec169001d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:45.389223 containerd[1622]: 2025-11-05 15:50:45.368 [INFO][4002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqspw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqspw-eth0" Nov 5 15:50:45.389223 containerd[1622]: 2025-11-05 15:50:45.368 [INFO][4002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80ec169001d ContainerID="85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqspw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqspw-eth0" Nov 5 15:50:45.389223 containerd[1622]: 2025-11-05 15:50:45.372 [INFO][4002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqspw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqspw-eth0" Nov 5 15:50:45.389572 containerd[1622]: 2025-11-05 15:50:45.372 [INFO][4002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqspw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqspw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nqspw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"87fd8eaf-920d-486a-9257-d765234a7603", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee", Pod:"coredns-674b8bbfcf-nqspw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80ec169001d", MAC:"ce:b6:d3:8b:de:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:45.389572 containerd[1622]: 2025-11-05 15:50:45.383 [INFO][4002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqspw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqspw-eth0" Nov 5 15:50:45.409456 kubelet[2812]: E1105 15:50:45.409401 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:45.427684 containerd[1622]: time="2025-11-05T15:50:45.427118102Z" level=info msg="connecting to shim 1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc" address="unix:///run/containerd/s/0628f68285ae6631ba46d9cc9e7704f854b0086c22e3aa7cd20513e0f68103c6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:45.463792 systemd[1]: Started cri-containerd-1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc.scope - libcontainer container 1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc. Nov 5 15:50:45.484777 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:50:45.527622 containerd[1622]: time="2025-11-05T15:50:45.525536321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2487af145ee0191bd4e69f85bc46ecb4b1fb66275b49fc5ea3643b170e8bd3b\" id:\"f3b89e98c0e056e306f5cdd020041a41ff5ee48e1ec5c3e6765f28186b8e9645\" pid:4067 exit_status:1 exited_at:{seconds:1762357845 nanos:524956812}" Nov 5 15:50:45.587532 containerd[1622]: time="2025-11-05T15:50:45.587464020Z" level=info msg="connecting to shim 85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee" address="unix:///run/containerd/s/2567da4230a42e45aa35f7242d46d7548d67fc482fbbbb8662bd3ca77e029b20" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:45.618700 systemd[1]: Started cri-containerd-85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee.scope - libcontainer container 
85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee. Nov 5 15:50:45.641882 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:50:45.675074 containerd[1622]: time="2025-11-05T15:50:45.674970742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c567bcd9-r5jn5,Uid:6f1cb5a0-7db4-483a-9554-eea8e26ca91e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1139af04dd5636f13b5cfdddc33f4f4b099137254b4c95b5469fc28d4a9442fc\"" Nov 5 15:50:45.694032 containerd[1622]: time="2025-11-05T15:50:45.693712566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:50:45.915202 containerd[1622]: time="2025-11-05T15:50:45.914759595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nqspw,Uid:87fd8eaf-920d-486a-9257-d765234a7603,Namespace:kube-system,Attempt:0,} returns sandbox id \"85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee\"" Nov 5 15:50:45.916604 kubelet[2812]: E1105 15:50:45.916543 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:46.239464 containerd[1622]: time="2025-11-05T15:50:46.238921273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6ffdc5-7z4nf,Uid:e24ec55b-ca98-450e-ad08-bd8f75c310ad,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:50:46.240015 containerd[1622]: time="2025-11-05T15:50:46.239929709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6ffdc5-9kwmw,Uid:ace081d4-c73d-4d8d-b64e-ba5786790ea2,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:50:46.243020 systemd-networkd[1509]: vxlan.calico: Link UP Nov 5 15:50:46.243028 systemd-networkd[1509]: vxlan.calico: Gained carrier Nov 5 15:50:46.257516 containerd[1622]: time="2025-11-05T15:50:46.257463213Z" level=info 
msg="CreateContainer within sandbox \"85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:50:46.329583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123859121.mount: Deactivated successfully. Nov 5 15:50:46.332025 containerd[1622]: time="2025-11-05T15:50:46.331974755Z" level=info msg="Container 04cb2bc1bdc5d951f25f6bb13d4c6d89773baf272d2b5733534292477e1ed5ad: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:46.479835 systemd-networkd[1509]: cali80ec169001d: Gained IPv6LL Nov 5 15:50:46.541690 systemd-networkd[1509]: calic292fdb09eb: Link UP Nov 5 15:50:46.542014 systemd-networkd[1509]: calic292fdb09eb: Gained carrier Nov 5 15:50:46.594501 containerd[1622]: time="2025-11-05T15:50:46.594436748Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:46.608326 containerd[1622]: time="2025-11-05T15:50:46.608263712Z" level=info msg="CreateContainer within sandbox \"85332fa5c5cf8037b0c2f8e737e24b22c474a8f4ed0a06a608ce48669deddcee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04cb2bc1bdc5d951f25f6bb13d4c6d89773baf272d2b5733534292477e1ed5ad\"" Nov 5 15:50:46.608769 containerd[1622]: time="2025-11-05T15:50:46.608491531Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:50:46.608769 containerd[1622]: time="2025-11-05T15:50:46.608519515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:50:46.609235 containerd[1622]: time="2025-11-05T15:50:46.609054377Z" level=info msg="StartContainer for \"04cb2bc1bdc5d951f25f6bb13d4c6d89773baf272d2b5733534292477e1ed5ad\"" Nov 5 15:50:46.609293 
kubelet[2812]: E1105 15:50:46.609069 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:50:46.609293 kubelet[2812]: E1105 15:50:46.609122 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:50:46.613273 containerd[1622]: time="2025-11-05T15:50:46.613121223Z" level=info msg="connecting to shim 04cb2bc1bdc5d951f25f6bb13d4c6d89773baf272d2b5733534292477e1ed5ad" address="unix:///run/containerd/s/2567da4230a42e45aa35f7242d46d7548d67fc482fbbbb8662bd3ca77e029b20" protocol=ttrpc version=3 Nov 5 15:50:46.618192 kubelet[2812]: E1105 15:50:46.618091 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:85a8d18e420d47d0bcbe7a43e311f448,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vmkwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c567bcd9-r5jn5_calico-system(6f1cb5a0-7db4-483a-9554-eea8e26ca91e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:46.620518 containerd[1622]: time="2025-11-05T15:50:46.620145323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 
15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.329 [INFO][4319] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0 calico-apiserver-698b6ffdc5- calico-apiserver e24ec55b-ca98-450e-ad08-bd8f75c310ad 844 0 2025-11-05 15:50:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:698b6ffdc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-698b6ffdc5-7z4nf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic292fdb09eb [] [] }} ContainerID="957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-7z4nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-" Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.330 [INFO][4319] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-7z4nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0" Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.384 [INFO][4347] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" HandleID="k8s-pod-network.957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" Workload="localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0" Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.385 [INFO][4347] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" 
HandleID="k8s-pod-network.957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" Workload="localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004faf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-698b6ffdc5-7z4nf", "timestamp":"2025-11-05 15:50:46.384992353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.385 [INFO][4347] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.385 [INFO][4347] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.385 [INFO][4347] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.395 [INFO][4347] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" host="localhost" Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.402 [INFO][4347] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.409 [INFO][4347] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.413 [INFO][4347] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.417 [INFO][4347] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 
15:50:46.417 [INFO][4347] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" host="localhost" Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.420 [INFO][4347] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.473 [INFO][4347] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" host="localhost" Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.528 [INFO][4347] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" host="localhost" Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.528 [INFO][4347] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" host="localhost" Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.529 [INFO][4347] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:50:46.621521 containerd[1622]: 2025-11-05 15:50:46.529 [INFO][4347] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" HandleID="k8s-pod-network.957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" Workload="localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0" Nov 5 15:50:46.623130 containerd[1622]: 2025-11-05 15:50:46.536 [INFO][4319] cni-plugin/k8s.go 418: Populated endpoint ContainerID="957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-7z4nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0", GenerateName:"calico-apiserver-698b6ffdc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e24ec55b-ca98-450e-ad08-bd8f75c310ad", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b6ffdc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-698b6ffdc5-7z4nf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic292fdb09eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:46.623130 containerd[1622]: 2025-11-05 15:50:46.536 [INFO][4319] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-7z4nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0" Nov 5 15:50:46.623130 containerd[1622]: 2025-11-05 15:50:46.536 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic292fdb09eb ContainerID="957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-7z4nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0" Nov 5 15:50:46.623130 containerd[1622]: 2025-11-05 15:50:46.541 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-7z4nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0" Nov 5 15:50:46.623130 containerd[1622]: 2025-11-05 15:50:46.541 [INFO][4319] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-7z4nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0", 
GenerateName:"calico-apiserver-698b6ffdc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e24ec55b-ca98-450e-ad08-bd8f75c310ad", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b6ffdc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be", Pod:"calico-apiserver-698b6ffdc5-7z4nf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic292fdb09eb", MAC:"aa:02:ae:99:68:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:46.623130 containerd[1622]: 2025-11-05 15:50:46.615 [INFO][4319] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-7z4nf" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--7z4nf-eth0" Nov 5 15:50:46.682896 systemd[1]: Started cri-containerd-04cb2bc1bdc5d951f25f6bb13d4c6d89773baf272d2b5733534292477e1ed5ad.scope - libcontainer container 04cb2bc1bdc5d951f25f6bb13d4c6d89773baf272d2b5733534292477e1ed5ad. 
Nov 5 15:50:46.689797 containerd[1622]: time="2025-11-05T15:50:46.689531648Z" level=info msg="connecting to shim 957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be" address="unix:///run/containerd/s/3b7c53b48018dce82f9ccd1370281719cd811f5577db94bc24d496950f9f9fbf" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:46.697180 systemd-networkd[1509]: cali2b43de416eb: Link UP Nov 5 15:50:46.700590 systemd-networkd[1509]: cali2b43de416eb: Gained carrier Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.360 [INFO][4330] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0 calico-apiserver-698b6ffdc5- calico-apiserver ace081d4-c73d-4d8d-b64e-ba5786790ea2 847 0 2025-11-05 15:50:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:698b6ffdc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-698b6ffdc5-9kwmw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2b43de416eb [] [] }} ContainerID="eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-9kwmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-" Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.361 [INFO][4330] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-9kwmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0" Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.419 [INFO][4357] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" HandleID="k8s-pod-network.eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" Workload="localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0" Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.419 [INFO][4357] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" HandleID="k8s-pod-network.eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" Workload="localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038e870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-698b6ffdc5-9kwmw", "timestamp":"2025-11-05 15:50:46.419183101 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.419 [INFO][4357] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.529 [INFO][4357] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.529 [INFO][4357] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.543 [INFO][4357] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" host="localhost" Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.617 [INFO][4357] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.628 [INFO][4357] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.631 [INFO][4357] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.638 [INFO][4357] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.638 [INFO][4357] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" host="localhost" Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.648 [INFO][4357] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.658 [INFO][4357] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" host="localhost" Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.672 [INFO][4357] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" host="localhost" Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.677 [INFO][4357] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" host="localhost" Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.677 [INFO][4357] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:50:46.727843 containerd[1622]: 2025-11-05 15:50:46.678 [INFO][4357] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" HandleID="k8s-pod-network.eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" Workload="localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0" Nov 5 15:50:46.730024 containerd[1622]: 2025-11-05 15:50:46.690 [INFO][4330] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-9kwmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0", GenerateName:"calico-apiserver-698b6ffdc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ace081d4-c73d-4d8d-b64e-ba5786790ea2", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b6ffdc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-698b6ffdc5-9kwmw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b43de416eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:46.730024 containerd[1622]: 2025-11-05 15:50:46.690 [INFO][4330] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-9kwmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0" Nov 5 15:50:46.730024 containerd[1622]: 2025-11-05 15:50:46.690 [INFO][4330] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b43de416eb ContainerID="eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-9kwmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0" Nov 5 15:50:46.730024 containerd[1622]: 2025-11-05 15:50:46.701 [INFO][4330] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-9kwmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0" Nov 5 15:50:46.730024 containerd[1622]: 2025-11-05 15:50:46.704 [INFO][4330] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-9kwmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0", GenerateName:"calico-apiserver-698b6ffdc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ace081d4-c73d-4d8d-b64e-ba5786790ea2", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b6ffdc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d", Pod:"calico-apiserver-698b6ffdc5-9kwmw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b43de416eb", MAC:"0e:ee:04:4d:b7:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:46.730024 containerd[1622]: 2025-11-05 15:50:46.719 [INFO][4330] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" Namespace="calico-apiserver" Pod="calico-apiserver-698b6ffdc5-9kwmw" WorkloadEndpoint="localhost-k8s-calico--apiserver--698b6ffdc5--9kwmw-eth0" Nov 5 15:50:46.735183 systemd[1]: Started cri-containerd-957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be.scope - libcontainer container 957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be. Nov 5 15:50:46.761740 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:50:46.762786 containerd[1622]: time="2025-11-05T15:50:46.762749013Z" level=info msg="StartContainer for \"04cb2bc1bdc5d951f25f6bb13d4c6d89773baf272d2b5733534292477e1ed5ad\" returns successfully" Nov 5 15:50:46.782905 containerd[1622]: time="2025-11-05T15:50:46.782841384Z" level=info msg="connecting to shim eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d" address="unix:///run/containerd/s/0d9144015280d3a7cc192ccf32514bdb54ec70f447ab762da49b8c047da9df7b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:46.815350 containerd[1622]: time="2025-11-05T15:50:46.813212354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6ffdc5-7z4nf,Uid:e24ec55b-ca98-450e-ad08-bd8f75c310ad,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"957aee3c48fa7db0fcd8b23b9909ba6a8814ca6aa1b803c2a6b53f8e9d0a48be\"" Nov 5 15:50:46.823922 systemd[1]: Started cri-containerd-eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d.scope - libcontainer container eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d. 
Nov 5 15:50:46.857935 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:50:46.914678 containerd[1622]: time="2025-11-05T15:50:46.914606616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6ffdc5-9kwmw,Uid:ace081d4-c73d-4d8d-b64e-ba5786790ea2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eae5b8c8e171d2f4b705134d3a15af2e0454adc362f57e51698bf61cfe63555d\"" Nov 5 15:50:47.055978 systemd-networkd[1509]: calidc7bf1be8c2: Gained IPv6LL Nov 5 15:50:47.102403 containerd[1622]: time="2025-11-05T15:50:47.102236360Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:47.103900 containerd[1622]: time="2025-11-05T15:50:47.103820494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:50:47.104003 containerd[1622]: time="2025-11-05T15:50:47.103890158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:50:47.104198 kubelet[2812]: E1105 15:50:47.104148 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:50:47.104317 kubelet[2812]: E1105 15:50:47.104216 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:50:47.104579 kubelet[2812]: E1105 15:50:47.104510 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmkwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMess
agePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c567bcd9-r5jn5_calico-system(6f1cb5a0-7db4-483a-9554-eea8e26ca91e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:47.104738 containerd[1622]: time="2025-11-05T15:50:47.104669761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:50:47.105905 kubelet[2812]: E1105 15:50:47.105831 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86c567bcd9-r5jn5" podUID="6f1cb5a0-7db4-483a-9554-eea8e26ca91e" Nov 5 15:50:47.429230 kubelet[2812]: E1105 15:50:47.429056 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:47.436878 kubelet[2812]: E1105 15:50:47.436819 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86c567bcd9-r5jn5" podUID="6f1cb5a0-7db4-483a-9554-eea8e26ca91e" Nov 5 15:50:47.447214 kubelet[2812]: I1105 15:50:47.447120 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nqspw" podStartSLOduration=38.447092971000004 podStartE2EDuration="38.447092971s" podCreationTimestamp="2025-11-05 15:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:47.444732832 +0000 UTC m=+44.309408207" watchObservedRunningTime="2025-11-05 15:50:47.447092971 +0000 UTC m=+44.311768316" Nov 5 15:50:47.495404 containerd[1622]: time="2025-11-05T15:50:47.495313144Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:47.496825 containerd[1622]: time="2025-11-05T15:50:47.496790352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:50:47.496943 containerd[1622]: 
time="2025-11-05T15:50:47.496870476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:50:47.497202 kubelet[2812]: E1105 15:50:47.497122 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:50:47.497284 kubelet[2812]: E1105 15:50:47.497208 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:50:47.497688 kubelet[2812]: E1105 15:50:47.497599 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ffgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-698b6ffdc5-7z4nf_calico-apiserver(e24ec55b-ca98-450e-ad08-bd8f75c310ad): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:47.497979 containerd[1622]: time="2025-11-05T15:50:47.497659887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:50:47.499422 kubelet[2812]: E1105 15:50:47.498922 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" podUID="e24ec55b-ca98-450e-ad08-bd8f75c310ad" Nov 5 15:50:47.858507 containerd[1622]: time="2025-11-05T15:50:47.858422880Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:47.860373 containerd[1622]: time="2025-11-05T15:50:47.860287405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:50:47.860441 containerd[1622]: time="2025-11-05T15:50:47.860386957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:50:47.860710 kubelet[2812]: E1105 15:50:47.860656 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:50:47.861326 kubelet[2812]: E1105 15:50:47.860728 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:50:47.861326 kubelet[2812]: E1105 15:50:47.860883 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jnlq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-698b6ffdc5-9kwmw_calico-apiserver(ace081d4-c73d-4d8d-b64e-ba5786790ea2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:47.862516 kubelet[2812]: E1105 15:50:47.862459 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" podUID="ace081d4-c73d-4d8d-b64e-ba5786790ea2" Nov 5 15:50:47.952812 systemd-networkd[1509]: vxlan.calico: Gained IPv6LL Nov 5 15:50:48.143941 systemd-networkd[1509]: cali2b43de416eb: Gained IPv6LL Nov 5 15:50:48.207919 systemd-networkd[1509]: calic292fdb09eb: 
Gained IPv6LL Nov 5 15:50:48.238626 containerd[1622]: time="2025-11-05T15:50:48.238576739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5fldj,Uid:908be0d9-6b2b-4915-9d34-62f14a2dce18,Namespace:calico-system,Attempt:0,}" Nov 5 15:50:48.238853 containerd[1622]: time="2025-11-05T15:50:48.238576909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lh7wv,Uid:3c6882bb-0885-494e-b1d2-fd2e09cf28b1,Namespace:calico-system,Attempt:0,}" Nov 5 15:50:48.371545 systemd-networkd[1509]: cali94e70a28816: Link UP Nov 5 15:50:48.372613 systemd-networkd[1509]: cali94e70a28816: Gained carrier Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.289 [INFO][4558] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--lh7wv-eth0 goldmane-666569f655- calico-system 3c6882bb-0885-494e-b1d2-fd2e09cf28b1 845 0 2025-11-05 15:50:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-lh7wv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali94e70a28816 [] [] }} ContainerID="20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" Namespace="calico-system" Pod="goldmane-666569f655-lh7wv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lh7wv-" Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.290 [INFO][4558] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" Namespace="calico-system" Pod="goldmane-666569f655-lh7wv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lh7wv-eth0" Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.320 [INFO][4579] ipam/ipam_plugin.go 227: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" HandleID="k8s-pod-network.20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" Workload="localhost-k8s-goldmane--666569f655--lh7wv-eth0" Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.321 [INFO][4579] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" HandleID="k8s-pod-network.20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" Workload="localhost-k8s-goldmane--666569f655--lh7wv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034cfd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-lh7wv", "timestamp":"2025-11-05 15:50:48.320986091 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.321 [INFO][4579] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.321 [INFO][4579] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.321 [INFO][4579] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.329 [INFO][4579] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" host="localhost" Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.335 [INFO][4579] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.339 [INFO][4579] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.342 [INFO][4579] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.345 [INFO][4579] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.345 [INFO][4579] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" host="localhost" Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.347 [INFO][4579] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.353 [INFO][4579] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" host="localhost" Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.364 [INFO][4579] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" host="localhost" Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.364 [INFO][4579] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" host="localhost" Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.364 [INFO][4579] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:50:48.388202 containerd[1622]: 2025-11-05 15:50:48.364 [INFO][4579] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" HandleID="k8s-pod-network.20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" Workload="localhost-k8s-goldmane--666569f655--lh7wv-eth0" Nov 5 15:50:48.389292 containerd[1622]: 2025-11-05 15:50:48.366 [INFO][4558] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" Namespace="calico-system" Pod="goldmane-666569f655-lh7wv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lh7wv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lh7wv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3c6882bb-0885-494e-b1d2-fd2e09cf28b1", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-lh7wv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali94e70a28816", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:48.389292 containerd[1622]: 2025-11-05 15:50:48.367 [INFO][4558] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" Namespace="calico-system" Pod="goldmane-666569f655-lh7wv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lh7wv-eth0" Nov 5 15:50:48.389292 containerd[1622]: 2025-11-05 15:50:48.367 [INFO][4558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94e70a28816 ContainerID="20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" Namespace="calico-system" Pod="goldmane-666569f655-lh7wv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lh7wv-eth0" Nov 5 15:50:48.389292 containerd[1622]: 2025-11-05 15:50:48.372 [INFO][4558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" Namespace="calico-system" Pod="goldmane-666569f655-lh7wv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lh7wv-eth0" Nov 5 15:50:48.389292 containerd[1622]: 2025-11-05 15:50:48.372 [INFO][4558] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" Namespace="calico-system" Pod="goldmane-666569f655-lh7wv" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lh7wv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lh7wv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3c6882bb-0885-494e-b1d2-fd2e09cf28b1", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb", Pod:"goldmane-666569f655-lh7wv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali94e70a28816", MAC:"12:6e:d6:fc:f4:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:48.389292 containerd[1622]: 2025-11-05 15:50:48.384 [INFO][4558] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" Namespace="calico-system" Pod="goldmane-666569f655-lh7wv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lh7wv-eth0" Nov 5 15:50:48.413684 containerd[1622]: time="2025-11-05T15:50:48.412986097Z" level=info msg="connecting to shim 
20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb" address="unix:///run/containerd/s/dd6e392c38c456554348f4b11424ba4919d2a4ca65f83a8d4f7ef869af6041ef" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:48.438188 kubelet[2812]: E1105 15:50:48.438044 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:48.440663 kubelet[2812]: E1105 15:50:48.440352 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" podUID="ace081d4-c73d-4d8d-b64e-ba5786790ea2" Nov 5 15:50:48.440814 kubelet[2812]: E1105 15:50:48.440724 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" podUID="e24ec55b-ca98-450e-ad08-bd8f75c310ad" Nov 5 15:50:48.441875 systemd[1]: Started cri-containerd-20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb.scope - libcontainer container 20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb. 
Nov 5 15:50:48.458970 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:50:48.506607 containerd[1622]: time="2025-11-05T15:50:48.506402733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lh7wv,Uid:3c6882bb-0885-494e-b1d2-fd2e09cf28b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"20bee03cd6faefbb2fdfa8d851e6247bcac9c498e98602376374bf91c6b506eb\"" Nov 5 15:50:48.511561 containerd[1622]: time="2025-11-05T15:50:48.511499019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:50:48.526997 systemd-networkd[1509]: cali49a2f84ede8: Link UP Nov 5 15:50:48.530831 systemd-networkd[1509]: cali49a2f84ede8: Gained carrier Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.296 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5fldj-eth0 csi-node-driver- calico-system 908be0d9-6b2b-4915-9d34-62f14a2dce18 727 0 2025-11-05 15:50:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5fldj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali49a2f84ede8 [] [] }} ContainerID="4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" Namespace="calico-system" Pod="csi-node-driver-5fldj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5fldj-" Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.296 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" Namespace="calico-system" Pod="csi-node-driver-5fldj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5fldj-eth0" Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.324 [INFO][4585] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" HandleID="k8s-pod-network.4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" Workload="localhost-k8s-csi--node--driver--5fldj-eth0" Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.324 [INFO][4585] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" HandleID="k8s-pod-network.4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" Workload="localhost-k8s-csi--node--driver--5fldj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a3490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5fldj", "timestamp":"2025-11-05 15:50:48.324456786 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.324 [INFO][4585] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.364 [INFO][4585] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.364 [INFO][4585] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.430 [INFO][4585] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" host="localhost" Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.442 [INFO][4585] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.488 [INFO][4585] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.494 [INFO][4585] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.502 [INFO][4585] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.502 [INFO][4585] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" host="localhost" Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.506 [INFO][4585] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649 Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.511 [INFO][4585] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" host="localhost" Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.520 [INFO][4585] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" host="localhost" Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.520 [INFO][4585] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" host="localhost" Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.521 [INFO][4585] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:50:48.551461 containerd[1622]: 2025-11-05 15:50:48.521 [INFO][4585] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" HandleID="k8s-pod-network.4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" Workload="localhost-k8s-csi--node--driver--5fldj-eth0" Nov 5 15:50:48.552201 containerd[1622]: 2025-11-05 15:50:48.524 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" Namespace="calico-system" Pod="csi-node-driver-5fldj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5fldj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5fldj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"908be0d9-6b2b-4915-9d34-62f14a2dce18", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5fldj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49a2f84ede8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:48.552201 containerd[1622]: 2025-11-05 15:50:48.524 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" Namespace="calico-system" Pod="csi-node-driver-5fldj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5fldj-eth0" Nov 5 15:50:48.552201 containerd[1622]: 2025-11-05 15:50:48.524 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49a2f84ede8 ContainerID="4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" Namespace="calico-system" Pod="csi-node-driver-5fldj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5fldj-eth0" Nov 5 15:50:48.552201 containerd[1622]: 2025-11-05 15:50:48.532 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" Namespace="calico-system" Pod="csi-node-driver-5fldj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5fldj-eth0" Nov 5 15:50:48.552201 containerd[1622]: 2025-11-05 15:50:48.532 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" 
Namespace="calico-system" Pod="csi-node-driver-5fldj" WorkloadEndpoint="localhost-k8s-csi--node--driver--5fldj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5fldj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"908be0d9-6b2b-4915-9d34-62f14a2dce18", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649", Pod:"csi-node-driver-5fldj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49a2f84ede8", MAC:"2e:3d:a7:ad:1b:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:48.552201 containerd[1622]: 2025-11-05 15:50:48.546 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" Namespace="calico-system" Pod="csi-node-driver-5fldj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5fldj-eth0" Nov 5 15:50:48.583046 containerd[1622]: time="2025-11-05T15:50:48.581545327Z" level=info msg="connecting to shim 4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649" address="unix:///run/containerd/s/5699575bdaefed8ac18c45d8f2b48553c20d6fd170d719076eb4e646b5ac5245" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:48.614935 systemd[1]: Started cri-containerd-4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649.scope - libcontainer container 4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649. Nov 5 15:50:48.631896 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:50:48.649667 containerd[1622]: time="2025-11-05T15:50:48.649599817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5fldj,Uid:908be0d9-6b2b-4915-9d34-62f14a2dce18,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e31069bb51278a41f5ce574e2972b83e747a08ede256c9985ee911249865649\"" Nov 5 15:50:48.936610 containerd[1622]: time="2025-11-05T15:50:48.936533538Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:48.938410 containerd[1622]: time="2025-11-05T15:50:48.938309019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:50:48.938617 containerd[1622]: time="2025-11-05T15:50:48.938351931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:50:48.938793 kubelet[2812]: E1105 15:50:48.938679 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:50:48.938793 kubelet[2812]: E1105 15:50:48.938748 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:50:48.939652 kubelet[2812]: E1105 15:50:48.939338 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:n
il,},VolumeMount{Name:kube-api-access-jtlfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lh7wv_calico-system(3c6882bb-0885-494e-b1d2-fd2e09cf28b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:48.939778 containerd[1622]: time="2025-11-05T15:50:48.939237537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:50:48.940698 kubelet[2812]: E1105 15:50:48.940659 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lh7wv" podUID="3c6882bb-0885-494e-b1d2-fd2e09cf28b1" Nov 5 15:50:49.242340 kubelet[2812]: E1105 15:50:49.242209 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:49.244220 containerd[1622]: time="2025-11-05T15:50:49.243840352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-whfpd,Uid:4921c0fe-4612-403c-bd27-6abad435a5f4,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:49.244299 containerd[1622]: time="2025-11-05T15:50:49.244271342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc69ccdf-27hbr,Uid:9efbb451-21a8-4af2-826d-c29a518d9d96,Namespace:calico-system,Attempt:0,}" Nov 5 15:50:49.313122 containerd[1622]: time="2025-11-05T15:50:49.313054152Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:49.442084 kubelet[2812]: E1105 15:50:49.442004 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lh7wv" podUID="3c6882bb-0885-494e-b1d2-fd2e09cf28b1" Nov 5 15:50:49.442516 kubelet[2812]: E1105 15:50:49.442485 2812 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:49.502088 containerd[1622]: time="2025-11-05T15:50:49.501766965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:50:49.502088 containerd[1622]: time="2025-11-05T15:50:49.501850937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:50:49.502255 kubelet[2812]: E1105 15:50:49.502098 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:50:49.502255 kubelet[2812]: E1105 15:50:49.502158 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:50:49.502432 kubelet[2812]: E1105 15:50:49.502326 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffvt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fldj_calico-system(908be0d9-6b2b-4915-9d34-62f14a2dce18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:49.505841 containerd[1622]: time="2025-11-05T15:50:49.505807503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:50:49.659978 systemd-networkd[1509]: calic5d80068a5f: Link UP Nov 5 15:50:49.661026 systemd-networkd[1509]: calic5d80068a5f: Gained carrier Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.568 [INFO][4723] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0 calico-kube-controllers-dc69ccdf- calico-system 9efbb451-21a8-4af2-826d-c29a518d9d96 843 0 2025-11-05 15:50:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dc69ccdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-dc69ccdf-27hbr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic5d80068a5f [] [] }} ContainerID="8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" Namespace="calico-system" Pod="calico-kube-controllers-dc69ccdf-27hbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-" Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.569 [INFO][4723] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" Namespace="calico-system" Pod="calico-kube-controllers-dc69ccdf-27hbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0" Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.611 [INFO][4742] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" HandleID="k8s-pod-network.8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" Workload="localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0" Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.611 [INFO][4742] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" HandleID="k8s-pod-network.8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" Workload="localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000581300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-dc69ccdf-27hbr", "timestamp":"2025-11-05 15:50:49.611246904 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.611 [INFO][4742] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.611 [INFO][4742] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.611 [INFO][4742] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.621 [INFO][4742] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" host="localhost" Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.628 [INFO][4742] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.633 [INFO][4742] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.636 [INFO][4742] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.639 [INFO][4742] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.639 [INFO][4742] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" host="localhost" Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.640 [INFO][4742] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800 Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.645 [INFO][4742] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" host="localhost" Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.652 [INFO][4742] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" host="localhost" Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.652 [INFO][4742] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" host="localhost" Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.652 [INFO][4742] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:50:49.680190 containerd[1622]: 2025-11-05 15:50:49.652 [INFO][4742] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" HandleID="k8s-pod-network.8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" Workload="localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0" Nov 5 15:50:49.681365 containerd[1622]: 2025-11-05 15:50:49.656 [INFO][4723] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" Namespace="calico-system" Pod="calico-kube-controllers-dc69ccdf-27hbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0", GenerateName:"calico-kube-controllers-dc69ccdf-", Namespace:"calico-system", SelfLink:"", UID:"9efbb451-21a8-4af2-826d-c29a518d9d96", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc69ccdf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-dc69ccdf-27hbr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5d80068a5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:49.681365 containerd[1622]: 2025-11-05 15:50:49.656 [INFO][4723] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" Namespace="calico-system" Pod="calico-kube-controllers-dc69ccdf-27hbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0" Nov 5 15:50:49.681365 containerd[1622]: 2025-11-05 15:50:49.656 [INFO][4723] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5d80068a5f ContainerID="8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" Namespace="calico-system" Pod="calico-kube-controllers-dc69ccdf-27hbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0" Nov 5 15:50:49.681365 containerd[1622]: 2025-11-05 15:50:49.660 [INFO][4723] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" Namespace="calico-system" Pod="calico-kube-controllers-dc69ccdf-27hbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0" Nov 5 15:50:49.681365 containerd[1622]: 2025-11-05 15:50:49.661 
[INFO][4723] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" Namespace="calico-system" Pod="calico-kube-controllers-dc69ccdf-27hbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0", GenerateName:"calico-kube-controllers-dc69ccdf-", Namespace:"calico-system", SelfLink:"", UID:"9efbb451-21a8-4af2-826d-c29a518d9d96", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc69ccdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800", Pod:"calico-kube-controllers-dc69ccdf-27hbr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5d80068a5f", MAC:"62:09:f7:af:ef:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:49.681365 containerd[1622]: 2025-11-05 15:50:49.675 [INFO][4723] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" Namespace="calico-system" Pod="calico-kube-controllers-dc69ccdf-27hbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc69ccdf--27hbr-eth0" Nov 5 15:50:49.714718 containerd[1622]: time="2025-11-05T15:50:49.714603384Z" level=info msg="connecting to shim 8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800" address="unix:///run/containerd/s/8b2f5ef5307df389ff0f4c3432cd6655c474d6fab3bfb80036d68f6c05292d18" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:49.758811 systemd[1]: Started cri-containerd-8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800.scope - libcontainer container 8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800. Nov 5 15:50:49.780080 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:50:49.812839 systemd-networkd[1509]: cali7782310d54d: Link UP Nov 5 15:50:49.814320 systemd-networkd[1509]: cali7782310d54d: Gained carrier Nov 5 15:50:49.822735 containerd[1622]: time="2025-11-05T15:50:49.822603985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc69ccdf-27hbr,Uid:9efbb451-21a8-4af2-826d-c29a518d9d96,Namespace:calico-system,Attempt:0,} returns sandbox id \"8422a7de0906c991d3f573f453da30b50162c2e5eb91c887317b93a347921800\"" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.571 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--whfpd-eth0 coredns-674b8bbfcf- kube-system 4921c0fe-4612-403c-bd27-6abad435a5f4 837 0 2025-11-05 15:50:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost 
coredns-674b8bbfcf-whfpd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7782310d54d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" Namespace="kube-system" Pod="coredns-674b8bbfcf-whfpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--whfpd-" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.571 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" Namespace="kube-system" Pod="coredns-674b8bbfcf-whfpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--whfpd-eth0" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.612 [INFO][4744] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" HandleID="k8s-pod-network.f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" Workload="localhost-k8s-coredns--674b8bbfcf--whfpd-eth0" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.612 [INFO][4744] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" HandleID="k8s-pod-network.f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" Workload="localhost-k8s-coredns--674b8bbfcf--whfpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-whfpd", "timestamp":"2025-11-05 15:50:49.6121487 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.612 [INFO][4744] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.653 [INFO][4744] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.653 [INFO][4744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.722 [INFO][4744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" host="localhost" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.734 [INFO][4744] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.742 [INFO][4744] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.746 [INFO][4744] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.749 [INFO][4744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.750 [INFO][4744] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" host="localhost" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.753 [INFO][4744] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.790 [INFO][4744] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" host="localhost" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.801 [INFO][4744] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" host="localhost" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.803 [INFO][4744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" host="localhost" Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.803 [INFO][4744] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:50:49.832738 containerd[1622]: 2025-11-05 15:50:49.803 [INFO][4744] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" HandleID="k8s-pod-network.f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" Workload="localhost-k8s-coredns--674b8bbfcf--whfpd-eth0" Nov 5 15:50:49.833436 containerd[1622]: 2025-11-05 15:50:49.808 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" Namespace="kube-system" Pod="coredns-674b8bbfcf-whfpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--whfpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--whfpd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4921c0fe-4612-403c-bd27-6abad435a5f4", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-whfpd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7782310d54d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:49.833436 containerd[1622]: 2025-11-05 15:50:49.808 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" Namespace="kube-system" Pod="coredns-674b8bbfcf-whfpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--whfpd-eth0" Nov 5 15:50:49.833436 containerd[1622]: 2025-11-05 15:50:49.808 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7782310d54d ContainerID="f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" Namespace="kube-system" Pod="coredns-674b8bbfcf-whfpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--whfpd-eth0" Nov 5 15:50:49.833436 containerd[1622]: 2025-11-05 15:50:49.815 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-whfpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--whfpd-eth0" Nov 5 15:50:49.833436 containerd[1622]: 2025-11-05 15:50:49.815 [INFO][4711] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" Namespace="kube-system" Pod="coredns-674b8bbfcf-whfpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--whfpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--whfpd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4921c0fe-4612-403c-bd27-6abad435a5f4", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 50, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e", Pod:"coredns-674b8bbfcf-whfpd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7782310d54d", MAC:"2e:79:4b:a7:f8:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:50:49.833436 containerd[1622]: 2025-11-05 15:50:49.826 [INFO][4711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" Namespace="kube-system" Pod="coredns-674b8bbfcf-whfpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--whfpd-eth0" Nov 5 15:50:49.859333 containerd[1622]: time="2025-11-05T15:50:49.858299025Z" level=info msg="connecting to shim f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e" address="unix:///run/containerd/s/f4f9982a55b4e0ef3b267f492511f2f0826bab113ce2025f738fd671c171ed32" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:49.888870 systemd[1]: Started cri-containerd-f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e.scope - libcontainer container f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e. 
Nov 5 15:50:49.892770 containerd[1622]: time="2025-11-05T15:50:49.892726895Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:49.894575 containerd[1622]: time="2025-11-05T15:50:49.894500018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:50:49.894645 containerd[1622]: time="2025-11-05T15:50:49.894603637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:50:49.895004 kubelet[2812]: E1105 15:50:49.894950 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:50:49.895103 kubelet[2812]: E1105 15:50:49.895017 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:50:49.895347 kubelet[2812]: E1105 15:50:49.895295 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffvt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fldj_calico-system(908be0d9-6b2b-4915-9d34-62f14a2dce18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:49.896038 containerd[1622]: time="2025-11-05T15:50:49.896010325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:50:49.897407 kubelet[2812]: E1105 15:50:49.897338 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:50:49.910656 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:50:49.951672 containerd[1622]: time="2025-11-05T15:50:49.951578439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-whfpd,Uid:4921c0fe-4612-403c-bd27-6abad435a5f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e\"" Nov 5 15:50:49.952447 kubelet[2812]: E1105 15:50:49.952414 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:49.965437 containerd[1622]: time="2025-11-05T15:50:49.965367792Z" level=info msg="CreateContainer 
within sandbox \"f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:50:49.988742 containerd[1622]: time="2025-11-05T15:50:49.988683052Z" level=info msg="Container b50015e1b6773c80c498ce3376a52ab8093f159e87f3e3642aba82110f98676e: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:49.998842 containerd[1622]: time="2025-11-05T15:50:49.998785247Z" level=info msg="CreateContainer within sandbox \"f04e0f5d418784b6685a069831ad58647083fb7fa069a856e56103d3dc16aa3e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b50015e1b6773c80c498ce3376a52ab8093f159e87f3e3642aba82110f98676e\"" Nov 5 15:50:50.000672 containerd[1622]: time="2025-11-05T15:50:50.000014163Z" level=info msg="StartContainer for \"b50015e1b6773c80c498ce3376a52ab8093f159e87f3e3642aba82110f98676e\"" Nov 5 15:50:50.001403 containerd[1622]: time="2025-11-05T15:50:50.001356918Z" level=info msg="connecting to shim b50015e1b6773c80c498ce3376a52ab8093f159e87f3e3642aba82110f98676e" address="unix:///run/containerd/s/f4f9982a55b4e0ef3b267f492511f2f0826bab113ce2025f738fd671c171ed32" protocol=ttrpc version=3 Nov 5 15:50:50.039161 systemd[1]: Started cri-containerd-b50015e1b6773c80c498ce3376a52ab8093f159e87f3e3642aba82110f98676e.scope - libcontainer container b50015e1b6773c80c498ce3376a52ab8093f159e87f3e3642aba82110f98676e. 
Nov 5 15:50:50.086268 containerd[1622]: time="2025-11-05T15:50:50.086173179Z" level=info msg="StartContainer for \"b50015e1b6773c80c498ce3376a52ab8093f159e87f3e3642aba82110f98676e\" returns successfully" Nov 5 15:50:50.193222 systemd-networkd[1509]: cali49a2f84ede8: Gained IPv6LL Nov 5 15:50:50.299313 containerd[1622]: time="2025-11-05T15:50:50.299015047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:50.300875 containerd[1622]: time="2025-11-05T15:50:50.300805271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:50:50.301014 containerd[1622]: time="2025-11-05T15:50:50.300986380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:50:50.301819 kubelet[2812]: E1105 15:50:50.301200 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:50:50.301819 kubelet[2812]: E1105 15:50:50.301274 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:50:50.301819 kubelet[2812]: E1105 15:50:50.301487 2812 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5bmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-dc69ccdf-27hbr_calico-system(9efbb451-21a8-4af2-826d-c29a518d9d96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:50.303147 kubelet[2812]: E1105 15:50:50.303076 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" podUID="9efbb451-21a8-4af2-826d-c29a518d9d96" Nov 5 15:50:50.322761 systemd-networkd[1509]: cali94e70a28816: Gained IPv6LL Nov 5 15:50:50.450155 kubelet[2812]: E1105 15:50:50.450088 2812 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" podUID="9efbb451-21a8-4af2-826d-c29a518d9d96" Nov 5 15:50:50.455584 kubelet[2812]: E1105 15:50:50.455538 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:50.458298 kubelet[2812]: E1105 15:50:50.458235 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lh7wv" podUID="3c6882bb-0885-494e-b1d2-fd2e09cf28b1" Nov 5 15:50:50.458752 kubelet[2812]: E1105 15:50:50.458703 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:50:50.540042 kubelet[2812]: I1105 15:50:50.539969 2812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-whfpd" podStartSLOduration=41.53994707 podStartE2EDuration="41.53994707s" podCreationTimestamp="2025-11-05 15:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:50.539174905 +0000 UTC m=+47.403850280" watchObservedRunningTime="2025-11-05 15:50:50.53994707 +0000 UTC m=+47.404622415" Nov 5 15:50:50.831907 systemd-networkd[1509]: calic5d80068a5f: Gained IPv6LL Nov 5 15:50:51.456829 kubelet[2812]: E1105 15:50:51.456790 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:51.458675 kubelet[2812]: E1105 15:50:51.458612 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" 
podUID="9efbb451-21a8-4af2-826d-c29a518d9d96" Nov 5 15:50:51.472854 systemd-networkd[1509]: cali7782310d54d: Gained IPv6LL Nov 5 15:50:52.458105 kubelet[2812]: E1105 15:50:52.458073 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:50:55.669750 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:41052.service - OpenSSH per-connection server daemon (10.0.0.1:41052). Nov 5 15:50:55.762501 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 41052 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:50:55.764856 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:55.771213 systemd-logind[1596]: New session 8 of user core. Nov 5 15:50:55.782844 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 15:50:55.977678 sshd[4930]: Connection closed by 10.0.0.1 port 41052 Nov 5 15:50:55.977918 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:55.983836 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:41052.service: Deactivated successfully. Nov 5 15:50:55.985963 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 15:50:55.986931 systemd-logind[1596]: Session 8 logged out. Waiting for processes to exit. Nov 5 15:50:55.988097 systemd-logind[1596]: Removed session 8. 
Nov 5 15:51:00.238972 containerd[1622]: time="2025-11-05T15:51:00.238864270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:51:00.786768 containerd[1622]: time="2025-11-05T15:51:00.786693740Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:00.855369 containerd[1622]: time="2025-11-05T15:51:00.855256031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:51:00.855369 containerd[1622]: time="2025-11-05T15:51:00.855315775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:51:00.855704 kubelet[2812]: E1105 15:51:00.855623 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:00.856056 kubelet[2812]: E1105 15:51:00.855708 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:00.856056 kubelet[2812]: E1105 15:51:00.855864 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jnlq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-698b6ffdc5-9kwmw_calico-apiserver(ace081d4-c73d-4d8d-b64e-ba5786790ea2): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:00.857079 kubelet[2812]: E1105 15:51:00.857032 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" podUID="ace081d4-c73d-4d8d-b64e-ba5786790ea2" Nov 5 15:51:00.994672 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:56960.service - OpenSSH per-connection server daemon (10.0.0.1:56960). Nov 5 15:51:01.037310 sshd[4953]: Accepted publickey for core from 10.0.0.1 port 56960 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:01.038713 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:01.043384 systemd-logind[1596]: New session 9 of user core. Nov 5 15:51:01.050794 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 15:51:01.186049 sshd[4956]: Connection closed by 10.0.0.1 port 56960 Nov 5 15:51:01.186415 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:01.191738 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:56960.service: Deactivated successfully. Nov 5 15:51:01.194085 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:51:01.195027 systemd-logind[1596]: Session 9 logged out. Waiting for processes to exit. Nov 5 15:51:01.196502 systemd-logind[1596]: Removed session 9. 
Nov 5 15:51:01.239845 containerd[1622]: time="2025-11-05T15:51:01.239756381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:51:01.619680 containerd[1622]: time="2025-11-05T15:51:01.619609082Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:01.621074 containerd[1622]: time="2025-11-05T15:51:01.621037452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:51:01.621155 containerd[1622]: time="2025-11-05T15:51:01.621064684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:51:01.621341 kubelet[2812]: E1105 15:51:01.621284 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:51:01.621455 kubelet[2812]: E1105 15:51:01.621346 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:51:01.621547 kubelet[2812]: E1105 15:51:01.621497 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:85a8d18e420d47d0bcbe7a43e311f448,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vmkwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c567bcd9-r5jn5_calico-system(6f1cb5a0-7db4-483a-9554-eea8e26ca91e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:01.623325 containerd[1622]: time="2025-11-05T15:51:01.623296720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 
15:51:02.017205 containerd[1622]: time="2025-11-05T15:51:02.017028125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:02.018447 containerd[1622]: time="2025-11-05T15:51:02.018397763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:51:02.018519 containerd[1622]: time="2025-11-05T15:51:02.018489648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:51:02.018743 kubelet[2812]: E1105 15:51:02.018674 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:51:02.018743 kubelet[2812]: E1105 15:51:02.018740 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:51:02.019189 kubelet[2812]: E1105 15:51:02.018904 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmkwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c567bcd9-r5jn5_calico-system(6f1cb5a0-7db4-483a-9554-eea8e26ca91e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:02.020180 kubelet[2812]: E1105 15:51:02.020116 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86c567bcd9-r5jn5" podUID="6f1cb5a0-7db4-483a-9554-eea8e26ca91e" Nov 5 15:51:02.240061 containerd[1622]: time="2025-11-05T15:51:02.239997852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:51:02.633742 containerd[1622]: time="2025-11-05T15:51:02.633678972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:02.634888 containerd[1622]: time="2025-11-05T15:51:02.634853155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:51:02.634972 containerd[1622]: time="2025-11-05T15:51:02.634933138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:51:02.635183 
kubelet[2812]: E1105 15:51:02.635129 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:02.635251 kubelet[2812]: E1105 15:51:02.635192 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:02.635472 kubelet[2812]: E1105 15:51:02.635390 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ffgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-698b6ffdc5-7z4nf_calico-apiserver(e24ec55b-ca98-450e-ad08-bd8f75c310ad): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:02.636617 kubelet[2812]: E1105 15:51:02.636584 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" podUID="e24ec55b-ca98-450e-ad08-bd8f75c310ad" Nov 5 15:51:03.239605 containerd[1622]: time="2025-11-05T15:51:03.239546365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:51:03.638853 containerd[1622]: time="2025-11-05T15:51:03.638796995Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:03.640331 containerd[1622]: time="2025-11-05T15:51:03.640291459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:51:03.640331 containerd[1622]: time="2025-11-05T15:51:03.640322669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:51:03.640575 kubelet[2812]: E1105 15:51:03.640523 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:51:03.640872 kubelet[2812]: E1105 15:51:03.640593 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:51:03.640908 kubelet[2812]: E1105 15:51:03.640861 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtlfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lh7wv_calico-system(3c6882bb-0885-494e-b1d2-fd2e09cf28b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:03.641062 containerd[1622]: time="2025-11-05T15:51:03.641036562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:51:03.642100 kubelet[2812]: E1105 15:51:03.642067 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lh7wv" podUID="3c6882bb-0885-494e-b1d2-fd2e09cf28b1" Nov 5 15:51:04.005161 containerd[1622]: time="2025-11-05T15:51:04.005017734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:04.096404 containerd[1622]: time="2025-11-05T15:51:04.096303789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:51:04.096599 containerd[1622]: time="2025-11-05T15:51:04.096359766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:51:04.096687 kubelet[2812]: E1105 15:51:04.096609 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:51:04.096769 kubelet[2812]: E1105 15:51:04.096705 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:51:04.096947 kubelet[2812]: E1105 15:51:04.096908 2812 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5bmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-dc69ccdf-27hbr_calico-system(9efbb451-21a8-4af2-826d-c29a518d9d96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:04.098177 kubelet[2812]: E1105 15:51:04.098112 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" podUID="9efbb451-21a8-4af2-826d-c29a518d9d96" Nov 5 15:51:05.239295 containerd[1622]: time="2025-11-05T15:51:05.239214043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:51:05.698850 containerd[1622]: 
time="2025-11-05T15:51:05.698791185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:05.719094 containerd[1622]: time="2025-11-05T15:51:05.719040561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:51:05.719193 containerd[1622]: time="2025-11-05T15:51:05.719048306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:51:05.719323 kubelet[2812]: E1105 15:51:05.719270 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:51:05.719774 kubelet[2812]: E1105 15:51:05.719324 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:51:05.719774 kubelet[2812]: E1105 15:51:05.719447 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffvt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fldj_calico-system(908be0d9-6b2b-4915-9d34-62f14a2dce18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:05.721293 containerd[1622]: time="2025-11-05T15:51:05.721265897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:51:06.179792 containerd[1622]: time="2025-11-05T15:51:06.179703782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:06.181536 containerd[1622]: time="2025-11-05T15:51:06.181433312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:51:06.181536 containerd[1622]: time="2025-11-05T15:51:06.181467416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:51:06.181825 kubelet[2812]: E1105 15:51:06.181757 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:51:06.181896 kubelet[2812]: E1105 15:51:06.181831 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:51:06.182041 kubelet[2812]: E1105 
15:51:06.181983 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffvt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-5fldj_calico-system(908be0d9-6b2b-4915-9d34-62f14a2dce18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:06.183293 kubelet[2812]: E1105 15:51:06.183201 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:51:06.201199 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:56974.service - OpenSSH per-connection server daemon (10.0.0.1:56974). Nov 5 15:51:06.290779 sshd[4972]: Accepted publickey for core from 10.0.0.1 port 56974 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:06.292712 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:06.298765 systemd-logind[1596]: New session 10 of user core. Nov 5 15:51:06.309793 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 5 15:51:06.533454 sshd[4975]: Connection closed by 10.0.0.1 port 56974 Nov 5 15:51:06.533840 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:06.539662 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:56974.service: Deactivated successfully. Nov 5 15:51:06.542132 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:51:06.543386 systemd-logind[1596]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:51:06.545164 systemd-logind[1596]: Removed session 10. Nov 5 15:51:11.552529 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:54036.service - OpenSSH per-connection server daemon (10.0.0.1:54036). Nov 5 15:51:11.629202 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 54036 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:11.630986 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:11.636801 systemd-logind[1596]: New session 11 of user core. Nov 5 15:51:11.647913 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 15:51:12.290358 sshd[5003]: Connection closed by 10.0.0.1 port 54036 Nov 5 15:51:12.290979 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:12.302680 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:54036.service: Deactivated successfully. Nov 5 15:51:12.305397 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:51:12.306564 systemd-logind[1596]: Session 11 logged out. Waiting for processes to exit. Nov 5 15:51:12.311833 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:54052.service - OpenSSH per-connection server daemon (10.0.0.1:54052). Nov 5 15:51:12.312768 systemd-logind[1596]: Removed session 11. 
Nov 5 15:51:12.387530 sshd[5017]: Accepted publickey for core from 10.0.0.1 port 54052 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:12.389724 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:12.394986 systemd-logind[1596]: New session 12 of user core. Nov 5 15:51:12.404913 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 15:51:12.592490 sshd[5020]: Connection closed by 10.0.0.1 port 54052 Nov 5 15:51:12.593847 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:12.608854 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:54052.service: Deactivated successfully. Nov 5 15:51:12.611793 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:51:12.612958 systemd-logind[1596]: Session 12 logged out. Waiting for processes to exit. Nov 5 15:51:12.615783 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:54062.service - OpenSSH per-connection server daemon (10.0.0.1:54062). Nov 5 15:51:12.616447 systemd-logind[1596]: Removed session 12. Nov 5 15:51:12.683358 sshd[5032]: Accepted publickey for core from 10.0.0.1 port 54062 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:12.685095 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:12.690481 systemd-logind[1596]: New session 13 of user core. Nov 5 15:51:12.699802 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 15:51:12.943022 sshd[5035]: Connection closed by 10.0.0.1 port 54062 Nov 5 15:51:12.943391 sshd-session[5032]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:12.948775 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:54062.service: Deactivated successfully. Nov 5 15:51:12.951355 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 15:51:12.952551 systemd-logind[1596]: Session 13 logged out. Waiting for processes to exit. 
Nov 5 15:51:12.953908 systemd-logind[1596]: Removed session 13. Nov 5 15:51:14.243984 kubelet[2812]: E1105 15:51:14.243887 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" podUID="ace081d4-c73d-4d8d-b64e-ba5786790ea2" Nov 5 15:51:14.245177 kubelet[2812]: E1105 15:51:14.244909 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lh7wv" podUID="3c6882bb-0885-494e-b1d2-fd2e09cf28b1" Nov 5 15:51:15.237961 kubelet[2812]: E1105 15:51:15.237610 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:51:15.240096 kubelet[2812]: E1105 15:51:15.239125 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" podUID="9efbb451-21a8-4af2-826d-c29a518d9d96" Nov 5 15:51:15.502553 containerd[1622]: time="2025-11-05T15:51:15.502286173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2487af145ee0191bd4e69f85bc46ecb4b1fb66275b49fc5ea3643b170e8bd3b\" id:\"e796ee875d70ece1f4c61251d4accd64650ae071d32d7ff4a1af049c8e8b90e0\" pid:5058 exited_at:{seconds:1762357875 nanos:501879501}" Nov 5 15:51:15.514037 kubelet[2812]: E1105 15:51:15.513991 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:51:17.239662 kubelet[2812]: E1105 15:51:17.239279 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" podUID="e24ec55b-ca98-450e-ad08-bd8f75c310ad" Nov 5 15:51:17.240883 kubelet[2812]: E1105 15:51:17.240821 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" 
for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86c567bcd9-r5jn5" podUID="6f1cb5a0-7db4-483a-9554-eea8e26ca91e" Nov 5 15:51:17.960780 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:54068.service - OpenSSH per-connection server daemon (10.0.0.1:54068). Nov 5 15:51:18.059490 sshd[5073]: Accepted publickey for core from 10.0.0.1 port 54068 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:18.062600 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:18.074932 systemd-logind[1596]: New session 14 of user core. Nov 5 15:51:18.081881 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 15:51:18.244353 sshd[5077]: Connection closed by 10.0.0.1 port 54068 Nov 5 15:51:18.244761 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:18.251408 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:54068.service: Deactivated successfully. Nov 5 15:51:18.253946 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:51:18.254894 systemd-logind[1596]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:51:18.256040 systemd-logind[1596]: Removed session 14. 
Nov 5 15:51:20.239274 kubelet[2812]: E1105 15:51:20.239201 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:51:23.259415 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:36794.service - OpenSSH per-connection server daemon (10.0.0.1:36794). Nov 5 15:51:23.343373 sshd[5094]: Accepted publickey for core from 10.0.0.1 port 36794 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:23.346263 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:23.357004 systemd-logind[1596]: New session 15 of user core. Nov 5 15:51:23.362521 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:51:23.560453 sshd[5097]: Connection closed by 10.0.0.1 port 36794 Nov 5 15:51:23.560979 sshd-session[5094]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:23.567622 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:36794.service: Deactivated successfully. Nov 5 15:51:23.570515 systemd[1]: session-15.scope: Deactivated successfully. 
Nov 5 15:51:23.572994 systemd-logind[1596]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:51:23.576493 systemd-logind[1596]: Removed session 15. Nov 5 15:51:28.240554 containerd[1622]: time="2025-11-05T15:51:28.240432596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:51:28.574825 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:36800.service - OpenSSH per-connection server daemon (10.0.0.1:36800). Nov 5 15:51:28.606498 containerd[1622]: time="2025-11-05T15:51:28.606451898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:28.608857 containerd[1622]: time="2025-11-05T15:51:28.608771173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:51:28.608937 containerd[1622]: time="2025-11-05T15:51:28.608802602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:51:28.609195 kubelet[2812]: E1105 15:51:28.609137 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:51:28.609665 kubelet[2812]: E1105 15:51:28.609210 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 
15:51:28.609665 kubelet[2812]: E1105 15:51:28.609475 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtlfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lh7wv_calico-system(3c6882bb-0885-494e-b1d2-fd2e09cf28b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:28.610287 containerd[1622]: time="2025-11-05T15:51:28.610218936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:51:28.611062 kubelet[2812]: E1105 15:51:28.610937 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lh7wv" podUID="3c6882bb-0885-494e-b1d2-fd2e09cf28b1" Nov 5 15:51:28.640845 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 36800 ssh2: RSA 
SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:28.642702 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:28.648062 systemd-logind[1596]: New session 16 of user core. Nov 5 15:51:28.657912 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 15:51:28.784657 sshd[5119]: Connection closed by 10.0.0.1 port 36800 Nov 5 15:51:28.784286 sshd-session[5116]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:28.790981 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:36800.service: Deactivated successfully. Nov 5 15:51:28.795766 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:51:28.796693 systemd-logind[1596]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:51:28.799213 systemd-logind[1596]: Removed session 16. Nov 5 15:51:28.920358 containerd[1622]: time="2025-11-05T15:51:28.920150358Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:28.923385 containerd[1622]: time="2025-11-05T15:51:28.923245814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:51:28.923385 containerd[1622]: time="2025-11-05T15:51:28.923319062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:51:28.923872 kubelet[2812]: E1105 15:51:28.923794 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:51:28.923872 
kubelet[2812]: E1105 15:51:28.923873 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:51:28.924395 kubelet[2812]: E1105 15:51:28.924329 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:85a8d18e420d47d0bcbe7a43e311f448,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vmkwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-86c567bcd9-r5jn5_calico-system(6f1cb5a0-7db4-483a-9554-eea8e26ca91e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:28.926964 containerd[1622]: time="2025-11-05T15:51:28.926917802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:51:29.258833 containerd[1622]: time="2025-11-05T15:51:29.258650145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:29.261485 containerd[1622]: time="2025-11-05T15:51:29.260377197Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:51:29.261485 containerd[1622]: time="2025-11-05T15:51:29.260447700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:51:29.261699 kubelet[2812]: E1105 15:51:29.261381 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:51:29.261758 kubelet[2812]: E1105 15:51:29.261725 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:51:29.262076 kubelet[2812]: E1105 15:51:29.261966 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmkwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c567bcd9-r5jn5_calico-system(6f1cb5a0-7db4-483a-9554-eea8e26ca91e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:29.262341 containerd[1622]: time="2025-11-05T15:51:29.262290080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:51:29.263142 kubelet[2812]: E1105 15:51:29.263089 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86c567bcd9-r5jn5" podUID="6f1cb5a0-7db4-483a-9554-eea8e26ca91e" Nov 5 15:51:29.609715 containerd[1622]: time="2025-11-05T15:51:29.609527370Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:29.723956 containerd[1622]: time="2025-11-05T15:51:29.723895579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:51:29.724309 containerd[1622]: time="2025-11-05T15:51:29.723935254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:51:29.725339 kubelet[2812]: E1105 15:51:29.724758 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:29.725339 kubelet[2812]: E1105 15:51:29.724824 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:29.725339 kubelet[2812]: E1105 15:51:29.724969 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jnlq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-698b6ffdc5-9kwmw_calico-apiserver(ace081d4-c73d-4d8d-b64e-ba5786790ea2): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:29.726763 kubelet[2812]: E1105 15:51:29.726711 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" podUID="ace081d4-c73d-4d8d-b64e-ba5786790ea2" Nov 5 15:51:30.238425 kubelet[2812]: E1105 15:51:30.238349 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:51:30.239531 containerd[1622]: time="2025-11-05T15:51:30.239422343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:51:30.636667 containerd[1622]: time="2025-11-05T15:51:30.636563627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:30.783836 containerd[1622]: time="2025-11-05T15:51:30.783755636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:51:30.784035 containerd[1622]: time="2025-11-05T15:51:30.783860054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:51:30.784086 kubelet[2812]: E1105 
15:51:30.784035 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:51:30.784517 kubelet[2812]: E1105 15:51:30.784098 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:51:30.784517 kubelet[2812]: E1105 15:51:30.784275 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagati
on:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5bmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-dc69ccdf-27hbr_calico-system(9efbb451-21a8-4af2-826d-c29a518d9d96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:30.785533 kubelet[2812]: E1105 15:51:30.785459 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" podUID="9efbb451-21a8-4af2-826d-c29a518d9d96" Nov 5 15:51:31.240297 containerd[1622]: time="2025-11-05T15:51:31.240241460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:51:31.651885 containerd[1622]: time="2025-11-05T15:51:31.651813500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:31.653225 containerd[1622]: time="2025-11-05T15:51:31.653182813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:51:31.653294 containerd[1622]: time="2025-11-05T15:51:31.653231225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:51:31.653620 kubelet[2812]: E1105 15:51:31.653530 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:31.653620 kubelet[2812]: E1105 15:51:31.653604 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:31.653913 kubelet[2812]: E1105 15:51:31.653815 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ffgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-698b6ffdc5-7z4nf_calico-apiserver(e24ec55b-ca98-450e-ad08-bd8f75c310ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:31.655096 kubelet[2812]: E1105 15:51:31.655042 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" podUID="e24ec55b-ca98-450e-ad08-bd8f75c310ad" Nov 5 15:51:32.238840 kubelet[2812]: E1105 15:51:32.238702 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:51:33.811603 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:49718.service - OpenSSH per-connection server daemon (10.0.0.1:49718). Nov 5 15:51:33.887251 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 49718 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:33.891789 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:33.915291 systemd-logind[1596]: New session 17 of user core. Nov 5 15:51:33.923916 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 15:51:34.113112 sshd[5138]: Connection closed by 10.0.0.1 port 49718 Nov 5 15:51:34.112330 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:34.120354 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:49718.service: Deactivated successfully. Nov 5 15:51:34.123420 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:51:34.127079 systemd-logind[1596]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:51:34.128530 systemd-logind[1596]: Removed session 17. 
Nov 5 15:51:34.243755 containerd[1622]: time="2025-11-05T15:51:34.243655073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:51:34.564883 containerd[1622]: time="2025-11-05T15:51:34.564791786Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:34.596473 containerd[1622]: time="2025-11-05T15:51:34.596370757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:51:34.596613 containerd[1622]: time="2025-11-05T15:51:34.596465106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:51:34.596901 kubelet[2812]: E1105 15:51:34.596836 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:51:34.596901 kubelet[2812]: E1105 15:51:34.596891 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:51:34.597511 kubelet[2812]: E1105 15:51:34.597018 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffvt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fldj_calico-system(908be0d9-6b2b-4915-9d34-62f14a2dce18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:34.599283 containerd[1622]: time="2025-11-05T15:51:34.599028007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:51:35.139883 containerd[1622]: time="2025-11-05T15:51:35.139821811Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:35.197772 containerd[1622]: time="2025-11-05T15:51:35.197681946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:51:35.198106 kubelet[2812]: E1105 15:51:35.198031 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:51:35.198214 kubelet[2812]: E1105 15:51:35.198111 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:51:35.198301 kubelet[2812]: E1105 15:51:35.198248 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffvt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5fldj_calico-system(908be0d9-6b2b-4915-9d34-62f14a2dce18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:35.199883 kubelet[2812]: E1105 15:51:35.199847 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:51:35.203612 containerd[1622]: time="2025-11-05T15:51:35.203556053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:51:39.142055 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:49722.service - OpenSSH per-connection server daemon (10.0.0.1:49722). Nov 5 15:51:39.458417 sshd[5152]: Accepted publickey for core from 10.0.0.1 port 49722 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:39.457988 sshd-session[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:39.478294 systemd-logind[1596]: New session 18 of user core. Nov 5 15:51:39.504722 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 5 15:51:39.801745 sshd[5156]: Connection closed by 10.0.0.1 port 49722 Nov 5 15:51:39.802096 sshd-session[5152]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:39.826171 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:49722.service: Deactivated successfully. Nov 5 15:51:39.832494 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:51:39.838898 systemd-logind[1596]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:51:39.855395 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:49734.service - OpenSSH per-connection server daemon (10.0.0.1:49734). Nov 5 15:51:39.870860 systemd-logind[1596]: Removed session 18. Nov 5 15:51:39.985156 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 49734 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:39.991310 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:40.016578 systemd-logind[1596]: New session 19 of user core. Nov 5 15:51:40.036039 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 5 15:51:40.243336 kubelet[2812]: E1105 15:51:40.242815 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86c567bcd9-r5jn5" podUID="6f1cb5a0-7db4-483a-9554-eea8e26ca91e" Nov 5 15:51:41.413813 sshd[5175]: Connection closed by 10.0.0.1 port 49734 Nov 5 15:51:41.414312 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:41.431512 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:49734.service: Deactivated successfully. Nov 5 15:51:41.435249 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:51:41.437326 systemd-logind[1596]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:51:41.443555 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:50858.service - OpenSSH per-connection server daemon (10.0.0.1:50858). Nov 5 15:51:41.445470 systemd-logind[1596]: Removed session 19. 
Nov 5 15:51:41.514944 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 50858 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:41.521529 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:41.532516 systemd-logind[1596]: New session 20 of user core. Nov 5 15:51:41.555569 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 15:51:42.243860 kubelet[2812]: E1105 15:51:42.243449 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lh7wv" podUID="3c6882bb-0885-494e-b1d2-fd2e09cf28b1" Nov 5 15:51:42.722954 sshd[5189]: Connection closed by 10.0.0.1 port 50858 Nov 5 15:51:42.721311 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:42.748008 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:50858.service: Deactivated successfully. Nov 5 15:51:42.751497 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:51:42.758858 systemd-logind[1596]: Session 20 logged out. Waiting for processes to exit. Nov 5 15:51:42.770262 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:50860.service - OpenSSH per-connection server daemon (10.0.0.1:50860). Nov 5 15:51:42.781778 systemd-logind[1596]: Removed session 20. 
Nov 5 15:51:42.950925 sshd[5217]: Accepted publickey for core from 10.0.0.1 port 50860 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:42.952986 sshd-session[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:42.959316 systemd-logind[1596]: New session 21 of user core. Nov 5 15:51:42.976976 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 15:51:43.242726 kubelet[2812]: E1105 15:51:43.242515 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" podUID="9efbb451-21a8-4af2-826d-c29a518d9d96" Nov 5 15:51:43.732803 sshd[5220]: Connection closed by 10.0.0.1 port 50860 Nov 5 15:51:43.737479 sshd-session[5217]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:43.778453 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:50860.service: Deactivated successfully. Nov 5 15:51:43.787950 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:51:43.791799 systemd-logind[1596]: Session 21 logged out. Waiting for processes to exit. Nov 5 15:51:43.802610 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:50866.service - OpenSSH per-connection server daemon (10.0.0.1:50866). Nov 5 15:51:43.810195 systemd-logind[1596]: Removed session 21. 
Nov 5 15:51:43.942509 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 50866 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:43.946834 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:43.972380 systemd-logind[1596]: New session 22 of user core. Nov 5 15:51:43.990188 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 15:51:44.242298 kubelet[2812]: E1105 15:51:44.241881 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:51:44.243011 kubelet[2812]: E1105 15:51:44.242563 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:51:44.244508 kubelet[2812]: E1105 15:51:44.244369 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" podUID="ace081d4-c73d-4d8d-b64e-ba5786790ea2" Nov 5 15:51:44.262957 sshd[5234]: Connection closed by 10.0.0.1 port 50866 Nov 5 15:51:44.263653 sshd-session[5231]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:44.276600 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:50866.service: Deactivated successfully. Nov 5 15:51:44.287568 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 15:51:44.301027 systemd-logind[1596]: Session 22 logged out. Waiting for processes to exit. 
Nov 5 15:51:44.317930 systemd-logind[1596]: Removed session 22. Nov 5 15:51:45.243018 kubelet[2812]: E1105 15:51:45.242936 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" podUID="e24ec55b-ca98-450e-ad08-bd8f75c310ad" Nov 5 15:51:45.727879 containerd[1622]: time="2025-11-05T15:51:45.727811494Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2487af145ee0191bd4e69f85bc46ecb4b1fb66275b49fc5ea3643b170e8bd3b\" id:\"fc77734aa7bade5865457377a6e277c85a1d1e889ee90a22189ca37667651fde\" pid:5259 exited_at:{seconds:1762357905 nanos:725555461}" Nov 5 15:51:46.256043 kubelet[2812]: E1105 15:51:46.255143 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18" Nov 5 15:51:46.292070 kernel: hrtimer: interrupt took 6637592 ns Nov 5 15:51:49.294982 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:50876.service - OpenSSH per-connection server daemon (10.0.0.1:50876). Nov 5 15:51:49.430220 sshd[5273]: Accepted publickey for core from 10.0.0.1 port 50876 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:49.439340 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:49.489055 systemd-logind[1596]: New session 23 of user core. Nov 5 15:51:49.516981 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 15:51:49.766066 sshd[5276]: Connection closed by 10.0.0.1 port 50876 Nov 5 15:51:49.768941 sshd-session[5273]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:49.783682 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:50876.service: Deactivated successfully. Nov 5 15:51:49.795500 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 15:51:49.801462 systemd-logind[1596]: Session 23 logged out. Waiting for processes to exit. Nov 5 15:51:49.804027 systemd-logind[1596]: Removed session 23. 
Nov 5 15:51:53.240431 kubelet[2812]: E1105 15:51:53.240297 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:51:54.241325 kubelet[2812]: E1105 15:51:54.239668 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lh7wv" podUID="3c6882bb-0885-494e-b1d2-fd2e09cf28b1" Nov 5 15:51:54.789926 systemd[1]: Started sshd@23-10.0.0.50:22-10.0.0.1:54916.service - OpenSSH per-connection server daemon (10.0.0.1:54916). Nov 5 15:51:54.868395 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 54916 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4 Nov 5 15:51:54.869254 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:54.883795 systemd-logind[1596]: New session 24 of user core. Nov 5 15:51:54.896024 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 15:51:55.125994 sshd[5293]: Connection closed by 10.0.0.1 port 54916 Nov 5 15:51:55.127000 sshd-session[5290]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:55.136222 systemd[1]: sshd@23-10.0.0.50:22-10.0.0.1:54916.service: Deactivated successfully. Nov 5 15:51:55.140498 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 15:51:55.142539 systemd-logind[1596]: Session 24 logged out. Waiting for processes to exit. Nov 5 15:51:55.144751 systemd-logind[1596]: Removed session 24. 
Nov 5 15:51:55.245245 kubelet[2812]: E1105 15:51:55.243598 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86c567bcd9-r5jn5" podUID="6f1cb5a0-7db4-483a-9554-eea8e26ca91e"
Nov 5 15:51:56.248429 kubelet[2812]: E1105 15:51:56.247060 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" podUID="ace081d4-c73d-4d8d-b64e-ba5786790ea2"
Nov 5 15:51:56.253549 kubelet[2812]: E1105 15:51:56.253299 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" podUID="9efbb451-21a8-4af2-826d-c29a518d9d96"
Nov 5 15:51:59.244416 kubelet[2812]: E1105 15:51:59.243904 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18"
Nov 5 15:52:00.156002 systemd[1]: Started sshd@24-10.0.0.50:22-10.0.0.1:58958.service - OpenSSH per-connection server daemon (10.0.0.1:58958).
Nov 5 15:52:00.245470 kubelet[2812]: E1105 15:52:00.244811 2812 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:52:00.248507 kubelet[2812]: E1105 15:52:00.245066 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" podUID="e24ec55b-ca98-450e-ad08-bd8f75c310ad"
Nov 5 15:52:00.324965 sshd[5308]: Accepted publickey for core from 10.0.0.1 port 58958 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4
Nov 5 15:52:00.330264 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:00.342786 systemd-logind[1596]: New session 25 of user core.
Nov 5 15:52:00.355734 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 5 15:52:00.705156 sshd[5311]: Connection closed by 10.0.0.1 port 58958
Nov 5 15:52:00.710176 sshd-session[5308]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:00.737373 systemd-logind[1596]: Session 25 logged out. Waiting for processes to exit.
Nov 5 15:52:00.738570 systemd[1]: sshd@24-10.0.0.50:22-10.0.0.1:58958.service: Deactivated successfully.
Nov 5 15:52:00.752038 systemd[1]: session-25.scope: Deactivated successfully.
Nov 5 15:52:00.780844 systemd-logind[1596]: Removed session 25.
Nov 5 15:52:05.725595 systemd[1]: Started sshd@25-10.0.0.50:22-10.0.0.1:58960.service - OpenSSH per-connection server daemon (10.0.0.1:58960).
Nov 5 15:52:05.882875 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 58960 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4
Nov 5 15:52:05.885800 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:05.892402 systemd-logind[1596]: New session 26 of user core.
Nov 5 15:52:05.902011 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 5 15:52:06.153448 sshd[5330]: Connection closed by 10.0.0.1 port 58960
Nov 5 15:52:06.155437 sshd-session[5327]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:06.161517 systemd-logind[1596]: Session 26 logged out. Waiting for processes to exit.
Nov 5 15:52:06.162011 systemd[1]: sshd@25-10.0.0.50:22-10.0.0.1:58960.service: Deactivated successfully.
Nov 5 15:52:06.166242 systemd[1]: session-26.scope: Deactivated successfully.
Nov 5 15:52:06.169337 systemd-logind[1596]: Removed session 26.
Nov 5 15:52:08.244964 kubelet[2812]: E1105 15:52:08.244852 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dc69ccdf-27hbr" podUID="9efbb451-21a8-4af2-826d-c29a518d9d96"
Nov 5 15:52:08.246905 kubelet[2812]: E1105 15:52:08.245863 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-9kwmw" podUID="ace081d4-c73d-4d8d-b64e-ba5786790ea2"
Nov 5 15:52:08.246905 kubelet[2812]: E1105 15:52:08.246342 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86c567bcd9-r5jn5" podUID="6f1cb5a0-7db4-483a-9554-eea8e26ca91e"
Nov 5 15:52:09.250344 containerd[1622]: time="2025-11-05T15:52:09.250231493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 5 15:52:09.809007 containerd[1622]: time="2025-11-05T15:52:09.808681564Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:52:09.811079 containerd[1622]: time="2025-11-05T15:52:09.810951114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 5 15:52:09.811486 containerd[1622]: time="2025-11-05T15:52:09.811058496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:52:09.812080 kubelet[2812]: E1105 15:52:09.812032 2812 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 15:52:09.812080 kubelet[2812]: E1105 15:52:09.812135 2812 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 15:52:09.812947 kubelet[2812]: E1105 15:52:09.812449 2812 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtlfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lh7wv_calico-system(3c6882bb-0885-494e-b1d2-fd2e09cf28b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:52:09.813584 kubelet[2812]: E1105 15:52:09.813551 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lh7wv" podUID="3c6882bb-0885-494e-b1d2-fd2e09cf28b1"
Nov 5 15:52:11.200786 systemd[1]: Started sshd@26-10.0.0.50:22-10.0.0.1:37402.service - OpenSSH per-connection server daemon (10.0.0.1:37402).
Nov 5 15:52:11.256497 kubelet[2812]: E1105 15:52:11.256396 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-698b6ffdc5-7z4nf" podUID="e24ec55b-ca98-450e-ad08-bd8f75c310ad"
Nov 5 15:52:11.433676 sshd[5353]: Accepted publickey for core from 10.0.0.1 port 37402 ssh2: RSA SHA256:ss4QIDziwO12RKL6RqqyjhG0K/AGjytasvZRYPZ5Eq4
Nov 5 15:52:11.439868 sshd-session[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:11.467982 systemd-logind[1596]: New session 27 of user core.
Nov 5 15:52:11.472982 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 5 15:52:11.888134 sshd[5356]: Connection closed by 10.0.0.1 port 37402
Nov 5 15:52:11.889956 sshd-session[5353]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:11.906599 systemd-logind[1596]: Session 27 logged out. Waiting for processes to exit.
Nov 5 15:52:11.914219 systemd[1]: sshd@26-10.0.0.50:22-10.0.0.1:37402.service: Deactivated successfully.
Nov 5 15:52:11.922452 systemd[1]: session-27.scope: Deactivated successfully.
Nov 5 15:52:11.928404 systemd-logind[1596]: Removed session 27.
Nov 5 15:52:13.245999 kubelet[2812]: E1105 15:52:13.245916 2812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5fldj" podUID="908be0d9-6b2b-4915-9d34-62f14a2dce18"