Nov 4 23:45:38.532340 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 4 23:45:38.532372 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:45:38.532381 kernel: BIOS-provided physical RAM map:
Nov 4 23:45:38.532388 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Nov 4 23:45:38.532395 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Nov 4 23:45:38.532409 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Nov 4 23:45:38.532417 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Nov 4 23:45:38.532424 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Nov 4 23:45:38.532434 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Nov 4 23:45:38.532441 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Nov 4 23:45:38.532448 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Nov 4 23:45:38.532455 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Nov 4 23:45:38.532462 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Nov 4 23:45:38.532476 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Nov 4 23:45:38.532484 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Nov 4 23:45:38.532492 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Nov 4 23:45:38.532502 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 4 23:45:38.532515 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 23:45:38.532523 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 23:45:38.532530 kernel: NX (Execute Disable) protection: active
Nov 4 23:45:38.532538 kernel: APIC: Static calls initialized
Nov 4 23:45:38.532545 kernel: e820: update [mem 0x9a13d018-0x9a146c57] usable ==> usable
Nov 4 23:45:38.532553 kernel: e820: update [mem 0x9a100018-0x9a13ce57] usable ==> usable
Nov 4 23:45:38.532560 kernel: extended physical RAM map:
Nov 4 23:45:38.532568 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Nov 4 23:45:38.532575 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Nov 4 23:45:38.532582 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Nov 4 23:45:38.532590 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Nov 4 23:45:38.532604 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a100017] usable
Nov 4 23:45:38.532611 kernel: reserve setup_data: [mem 0x000000009a100018-0x000000009a13ce57] usable
Nov 4 23:45:38.532618 kernel: reserve setup_data: [mem 0x000000009a13ce58-0x000000009a13d017] usable
Nov 4 23:45:38.532626 kernel: reserve setup_data: [mem 0x000000009a13d018-0x000000009a146c57] usable
Nov 4 23:45:38.532633 kernel: reserve setup_data: [mem 0x000000009a146c58-0x000000009b8ecfff] usable
Nov 4 23:45:38.532640 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Nov 4 23:45:38.532648 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Nov 4 23:45:38.532655 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Nov 4 23:45:38.532664 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Nov 4 23:45:38.532672 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Nov 4 23:45:38.532688 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Nov 4 23:45:38.532696 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Nov 4 23:45:38.532711 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Nov 4 23:45:38.532718 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 4 23:45:38.532726 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 23:45:38.532740 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 23:45:38.532747 kernel: efi: EFI v2.7 by EDK II
Nov 4 23:45:38.532755 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Nov 4 23:45:38.532763 kernel: random: crng init done
Nov 4 23:45:38.532771 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Nov 4 23:45:38.532778 kernel: secureboot: Secure boot enabled
Nov 4 23:45:38.532786 kernel: SMBIOS 2.8 present.
Nov 4 23:45:38.532803 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 4 23:45:38.532811 kernel: DMI: Memory slots populated: 1/1
Nov 4 23:45:38.532825 kernel: Hypervisor detected: KVM
Nov 4 23:45:38.532833 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Nov 4 23:45:38.532840 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 23:45:38.532848 kernel: kvm-clock: using sched offset of 6804120432 cycles
Nov 4 23:45:38.532856 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 23:45:38.532865 kernel: tsc: Detected 2794.750 MHz processor
Nov 4 23:45:38.532874 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 23:45:38.532882 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 23:45:38.532890 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Nov 4 23:45:38.532946 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 4 23:45:38.532956 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 23:45:38.532967 kernel: Using GB pages for direct mapping
Nov 4 23:45:38.532975 kernel: ACPI: Early table checksum verification disabled
Nov 4 23:45:38.532983 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Nov 4 23:45:38.532992 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 4 23:45:38.533000 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:45:38.533016 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:45:38.533024 kernel: ACPI: FACS 0x000000009BBDD000 000040
Nov 4 23:45:38.533032 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:45:38.533040 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:45:38.533048 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:45:38.533057 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:45:38.533065 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 4 23:45:38.533079 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Nov 4 23:45:38.533087 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Nov 4 23:45:38.533095 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Nov 4 23:45:38.533103 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Nov 4 23:45:38.533111 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Nov 4 23:45:38.533119 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Nov 4 23:45:38.533127 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Nov 4 23:45:38.533135 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Nov 4 23:45:38.533150 kernel: No NUMA configuration found
Nov 4 23:45:38.533158 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Nov 4 23:45:38.533166 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Nov 4 23:45:38.533174 kernel: Zone ranges:
Nov 4 23:45:38.533183 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 23:45:38.533191 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Nov 4 23:45:38.533199 kernel: Normal empty
Nov 4 23:45:38.533213 kernel: Device empty
Nov 4 23:45:38.533221 kernel: Movable zone start for each node
Nov 4 23:45:38.533229 kernel: Early memory node ranges
Nov 4 23:45:38.533237 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Nov 4 23:45:38.533245 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Nov 4 23:45:38.533253 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Nov 4 23:45:38.533261 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Nov 4 23:45:38.533269 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Nov 4 23:45:38.533283 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Nov 4 23:45:38.533291 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 23:45:38.533299 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Nov 4 23:45:38.533308 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 23:45:38.533316 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 4 23:45:38.533324 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 4 23:45:38.533332 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Nov 4 23:45:38.533347 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 23:45:38.533355 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 23:45:38.533363 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 23:45:38.533371 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 23:45:38.533381 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 23:45:38.533390 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 23:45:38.533398 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 23:45:38.533413 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 23:45:38.533421 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 23:45:38.533429 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 23:45:38.533437 kernel: TSC deadline timer available
Nov 4 23:45:38.533445 kernel: CPU topo: Max. logical packages: 1
Nov 4 23:45:38.533454 kernel: CPU topo: Max. logical dies: 1
Nov 4 23:45:38.533487 kernel: CPU topo: Max. dies per package: 1
Nov 4 23:45:38.533495 kernel: CPU topo: Max. threads per core: 1
Nov 4 23:45:38.533504 kernel: CPU topo: Num. cores per package: 4
Nov 4 23:45:38.533512 kernel: CPU topo: Num. threads per package: 4
Nov 4 23:45:38.533529 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 4 23:45:38.533537 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 23:45:38.533545 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 4 23:45:38.533554 kernel: kvm-guest: setup PV sched yield
Nov 4 23:45:38.533569 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 4 23:45:38.533577 kernel: Booting paravirtualized kernel on KVM
Nov 4 23:45:38.533586 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 23:45:38.533595 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 4 23:45:38.533603 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 4 23:45:38.533612 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 4 23:45:38.533620 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 4 23:45:38.533635 kernel: kvm-guest: PV spinlocks enabled
Nov 4 23:45:38.533644 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 4 23:45:38.533653 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:45:38.533662 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 23:45:38.533671 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 23:45:38.533679 kernel: Fallback order for Node 0: 0
Nov 4 23:45:38.533694 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Nov 4 23:45:38.533703 kernel: Policy zone: DMA32
Nov 4 23:45:38.533711 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 23:45:38.533720 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 4 23:45:38.533728 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 23:45:38.533737 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 23:45:38.533745 kernel: Dynamic Preempt: voluntary
Nov 4 23:45:38.533760 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 23:45:38.533769 kernel: rcu: RCU event tracing is enabled.
Nov 4 23:45:38.533777 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 4 23:45:38.533786 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 23:45:38.533803 kernel: Rude variant of Tasks RCU enabled.
Nov 4 23:45:38.533811 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 23:45:38.533820 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 23:45:38.533828 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 4 23:45:38.533848 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 23:45:38.533856 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 23:45:38.533867 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 23:45:38.533876 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 4 23:45:38.533885 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 23:45:38.533894 kernel: Console: colour dummy device 80x25
Nov 4 23:45:38.533913 kernel: printk: legacy console [ttyS0] enabled
Nov 4 23:45:38.534115 kernel: ACPI: Core revision 20240827
Nov 4 23:45:38.534124 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 23:45:38.534132 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 23:45:38.534140 kernel: x2apic enabled
Nov 4 23:45:38.534149 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 23:45:38.534157 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 4 23:45:38.534166 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 4 23:45:38.534181 kernel: kvm-guest: setup PV IPIs
Nov 4 23:45:38.534189 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 23:45:38.534198 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 4 23:45:38.534207 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 4 23:45:38.534215 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 4 23:45:38.534224 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 4 23:45:38.534232 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 4 23:45:38.534248 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 23:45:38.534261 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 23:45:38.534278 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 23:45:38.534287 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 4 23:45:38.534296 kernel: active return thunk: retbleed_return_thunk
Nov 4 23:45:38.534305 kernel: RETBleed: Mitigation: untrained return thunk
Nov 4 23:45:38.534313 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 23:45:38.534337 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 23:45:38.534347 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 4 23:45:38.534358 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 4 23:45:38.534368 kernel: active return thunk: srso_return_thunk
Nov 4 23:45:38.534378 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 4 23:45:38.534388 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 23:45:38.534399 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 23:45:38.534420 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 23:45:38.534431 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 23:45:38.534443 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 4 23:45:38.534454 kernel: Freeing SMP alternatives memory: 32K
Nov 4 23:45:38.534466 kernel: pid_max: default: 32768 minimum: 301
Nov 4 23:45:38.534477 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 23:45:38.534489 kernel: landlock: Up and running.
Nov 4 23:45:38.534509 kernel: SELinux: Initializing.
Nov 4 23:45:38.534520 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 23:45:38.534532 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 23:45:38.534544 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 4 23:45:38.534556 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 4 23:45:38.534568 kernel: ... version: 0
Nov 4 23:45:38.534582 kernel: ... bit width: 48
Nov 4 23:45:38.534602 kernel: ... generic registers: 6
Nov 4 23:45:38.534613 kernel: ... value mask: 0000ffffffffffff
Nov 4 23:45:38.534627 kernel: ... max period: 00007fffffffffff
Nov 4 23:45:38.534640 kernel: ... fixed-purpose events: 0
Nov 4 23:45:38.534654 kernel: ... event mask: 000000000000003f
Nov 4 23:45:38.534665 kernel: signal: max sigframe size: 1776
Nov 4 23:45:38.534677 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 23:45:38.534698 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 23:45:38.534710 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 23:45:38.534721 kernel: smp: Bringing up secondary CPUs ...
Nov 4 23:45:38.534733 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 23:45:38.534745 kernel: .... node #0, CPUs: #1 #2 #3
Nov 4 23:45:38.534756 kernel: smp: Brought up 1 node, 4 CPUs
Nov 4 23:45:38.534768 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 4 23:45:38.534790 kernel: Memory: 2431736K/2552216K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 114544K reserved, 0K cma-reserved)
Nov 4 23:45:38.534811 kernel: devtmpfs: initialized
Nov 4 23:45:38.534823 kernel: x86/mm: Memory block size: 128MB
Nov 4 23:45:38.534835 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Nov 4 23:45:38.534847 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Nov 4 23:45:38.534860 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 23:45:38.534872 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 4 23:45:38.534895 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 23:45:38.534948 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 23:45:38.534958 kernel: audit: initializing netlink subsys (disabled)
Nov 4 23:45:38.534966 kernel: audit: type=2000 audit(1762299935.319:1): state=initialized audit_enabled=0 res=1
Nov 4 23:45:38.534975 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 23:45:38.534984 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 23:45:38.534993 kernel: cpuidle: using governor menu
Nov 4 23:45:38.535010 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 23:45:38.535019 kernel: dca service started, version 1.12.1
Nov 4 23:45:38.535027 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 4 23:45:38.535036 kernel: PCI: Using configuration type 1 for base access
Nov 4 23:45:38.535044 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 23:45:38.535053 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 23:45:38.535062 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 23:45:38.535077 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 23:45:38.535086 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 23:45:38.535094 kernel: ACPI: Added _OSI(Module Device)
Nov 4 23:45:38.535103 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 23:45:38.535111 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 23:45:38.535119 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 23:45:38.535128 kernel: ACPI: Interpreter enabled
Nov 4 23:45:38.535136 kernel: ACPI: PM: (supports S0 S5)
Nov 4 23:45:38.535151 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 23:45:38.535160 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 23:45:38.535169 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 23:45:38.535179 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 4 23:45:38.535188 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 23:45:38.535476 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 23:45:38.535674 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 4 23:45:38.535863 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 4 23:45:38.535876 kernel: PCI host bridge to bus 0000:00
Nov 4 23:45:38.536083 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 23:45:38.536248 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 23:45:38.536409 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 23:45:38.536585 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Nov 4 23:45:38.536745 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Nov 4 23:45:38.536932 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Nov 4 23:45:38.537096 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 23:45:38.537368 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 4 23:45:38.537594 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 4 23:45:38.537774 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Nov 4 23:45:38.538000 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Nov 4 23:45:38.538194 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 4 23:45:38.538384 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 23:45:38.538579 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 23:45:38.538921 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Nov 4 23:45:38.539101 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Nov 4 23:45:38.539302 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Nov 4 23:45:38.539499 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:45:38.539675 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Nov 4 23:45:38.539886 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Nov 4 23:45:38.540196 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Nov 4 23:45:38.540388 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 23:45:38.542131 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Nov 4 23:45:38.542364 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Nov 4 23:45:38.542560 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Nov 4 23:45:38.542778 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Nov 4 23:45:38.543018 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 4 23:45:38.543214 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 4 23:45:38.543427 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 4 23:45:38.543619 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Nov 4 23:45:38.543826 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Nov 4 23:45:38.544074 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 4 23:45:38.544267 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Nov 4 23:45:38.544282 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 23:45:38.544293 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 23:45:38.544304 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 23:45:38.544315 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 23:45:38.544338 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 4 23:45:38.544348 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 4 23:45:38.544359 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 4 23:45:38.544370 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 4 23:45:38.544384 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 4 23:45:38.544395 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 4 23:45:38.544406 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 4 23:45:38.544424 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 4 23:45:38.544435 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 4 23:45:38.544446 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 4 23:45:38.544456 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 4 23:45:38.544467 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 4 23:45:38.544478 kernel: iommu: Default domain type: Translated
Nov 4 23:45:38.544489 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 23:45:38.544507 kernel: efivars: Registered efivars operations
Nov 4 23:45:38.544518 kernel: PCI: Using ACPI for IRQ routing
Nov 4 23:45:38.544529 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 23:45:38.544540 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Nov 4 23:45:38.544551 kernel: e820: reserve RAM buffer [mem 0x9a100018-0x9bffffff]
Nov 4 23:45:38.544561 kernel: e820: reserve RAM buffer [mem 0x9a13d018-0x9bffffff]
Nov 4 23:45:38.544571 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Nov 4 23:45:38.544589 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Nov 4 23:45:38.544787 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 4 23:45:38.545004 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 4 23:45:38.545194 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 23:45:38.545208 kernel: vgaarb: loaded
Nov 4 23:45:38.545219 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 23:45:38.545230 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 23:45:38.545398 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 23:45:38.545409 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 23:45:38.545421 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 23:45:38.545431 kernel: pnp: PnP ACPI init
Nov 4 23:45:38.545654 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Nov 4 23:45:38.545671 kernel: pnp: PnP ACPI: found 6 devices
Nov 4 23:45:38.545682 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 23:45:38.545711 kernel: NET: Registered PF_INET protocol family
Nov 4 23:45:38.545722 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 4 23:45:38.545733 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 4 23:45:38.545744 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 23:45:38.545755 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 23:45:38.545766 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 4 23:45:38.545786 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 4 23:45:38.545812 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 23:45:38.545824 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 23:45:38.545837 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 23:45:38.545848 kernel: NET: Registered PF_XDP protocol family
Nov 4 23:45:38.546081 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 4 23:45:38.546274 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 4 23:45:38.546475 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 23:45:38.546654 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 23:45:38.546845 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 23:45:38.547041 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Nov 4 23:45:38.547229 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Nov 4 23:45:38.547407 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Nov 4 23:45:38.547421 kernel: PCI: CLS 0 bytes, default 64
Nov 4 23:45:38.547444 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 4 23:45:38.547455 kernel: Initialise system trusted keyrings
Nov 4 23:45:38.547466 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 4 23:45:38.547477 kernel: Key type asymmetric registered
Nov 4 23:45:38.547488 kernel: Asymmetric key parser 'x509' registered
Nov 4 23:45:38.547547 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 23:45:38.547565 kernel: io scheduler mq-deadline registered
Nov 4 23:45:38.547584 kernel: io scheduler kyber registered
Nov 4 23:45:38.547595 kernel: io scheduler bfq registered
Nov 4 23:45:38.547606 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 23:45:38.547618 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 4 23:45:38.547629 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 4 23:45:38.547640 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 4 23:45:38.547651 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 23:45:38.547670 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 23:45:38.547682 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 23:45:38.547693 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 23:45:38.547704 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 23:45:38.547936 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 4 23:45:38.547953 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 23:45:38.548151 kernel: rtc_cmos 00:04: registered as rtc0
Nov 4 23:45:38.548338 kernel: rtc_cmos 00:04: setting system clock to 2025-11-04T23:45:36 UTC (1762299936)
Nov 4 23:45:38.548522 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 4 23:45:38.548548 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 4 23:45:38.548559 kernel: efifb: probing for efifb
Nov 4 23:45:38.548571 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Nov 4 23:45:38.548582 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Nov 4 23:45:38.548600 kernel: efifb: scrolling: redraw
Nov 4 23:45:38.548611 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 4 23:45:38.548623 kernel: Console: switching to colour frame buffer device 160x50
Nov 4 23:45:38.548641 kernel: fb0: EFI VGA frame buffer device
Nov 4 23:45:38.548653 kernel: pstore: Using crash dump compression: deflate
Nov 4 23:45:38.548671 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 4 23:45:38.548682 kernel: NET: Registered PF_INET6 protocol family
Nov 4 23:45:38.548694 kernel: Segment Routing with IPv6
Nov 4 23:45:38.548705 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 23:45:38.548716 kernel: NET: Registered PF_PACKET protocol family
Nov 4 23:45:38.548727 kernel: Key type dns_resolver registered
Nov 4 23:45:38.548738 kernel: IPI shorthand broadcast: enabled
Nov 4 23:45:38.548757 kernel: sched_clock: Marking stable (1503003722, 273888922)->(1836754745, -59862101)
Nov 4 23:45:38.548768 kernel: registered taskstats version 1
Nov 4 23:45:38.548780 kernel: Loading compiled-in X.509 certificates
Nov 4 23:45:38.548803 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44'
Nov 4 23:45:38.548815 kernel: Demotion targets for Node 0: null
Nov 4 23:45:38.548826 kernel: Key type .fscrypt registered
Nov 4 23:45:38.548837 kernel: Key type fscrypt-provisioning registered
Nov 4 23:45:38.548856 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 23:45:38.548868 kernel: ima: Allocated hash algorithm: sha1
Nov 4 23:45:38.548879 kernel: ima: No architecture policies found
Nov 4 23:45:38.548890 kernel: clk: Disabling unused clocks
Nov 4 23:45:38.548918 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 4 23:45:38.548930 kernel: Write protecting the kernel read-only data: 40960k
Nov 4 23:45:38.548941 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 4 23:45:38.548960 kernel: Run /init as init process
Nov 4 23:45:38.548971 kernel: with arguments:
Nov 4 23:45:38.548982 kernel: /init
Nov 4 23:45:38.548994 kernel: with environment:
Nov 4 23:45:38.549005 kernel: HOME=/
Nov 4 23:45:38.549016 kernel: TERM=linux
Nov 4 23:45:38.549027 kernel: SCSI subsystem initialized
Nov 4 23:45:38.549045 kernel: libata version 3.00 loaded.
Nov 4 23:45:38.549252 kernel: ahci 0000:00:1f.2: version 3.0
Nov 4 23:45:38.549267 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 4 23:45:38.549466 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 4 23:45:38.549661 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 4 23:45:38.550007 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 4 23:45:38.550272 kernel: scsi host0: ahci
Nov 4 23:45:38.550511 kernel: scsi host1: ahci
Nov 4 23:45:38.550730 kernel: scsi host2: ahci
Nov 4 23:45:38.550971 kernel: scsi host3: ahci
Nov 4 23:45:38.551191 kernel: scsi host4: ahci
Nov 4 23:45:38.551582 kernel: scsi host5: ahci
Nov 4 23:45:38.551608 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Nov 4 23:45:38.551620 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Nov 4 23:45:38.551632 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Nov 4 23:45:38.551643 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Nov 4 23:45:38.551654 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Nov 4 23:45:38.551666 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Nov 4 23:45:38.551684 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 4 23:45:38.551695 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 4 23:45:38.551706 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 4 23:45:38.551718 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 4 23:45:38.551729 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 4 23:45:38.551741 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 23:45:38.551752 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 4 23:45:38.551770 kernel: ata3.00: applying bridge limits
Nov 4 23:45:38.551781 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 23:45:38.551801 kernel: ata3.00: configured for UDMA/100
Nov 4 23:45:38.551826 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 4 23:45:38.552111 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 4 23:45:38.552334 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 4 23:45:38.552533 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 4 23:45:38.552561 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 23:45:38.552573 kernel: GPT:16515071 != 27000831
Nov 4 23:45:38.552584 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 23:45:38.552595 kernel: GPT:16515071 != 27000831
Nov 4 23:45:38.552606 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 23:45:38.552617 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 4 23:45:38.552868 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 4 23:45:38.552885 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 4 23:45:38.553119 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 4 23:45:38.553134 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 4 23:45:38.553145 kernel: device-mapper: uevent: version 1.0.3
Nov 4 23:45:38.553157 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 23:45:38.553168 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 4 23:45:38.553199 kernel: raid6: avx2x4 gen() 29277 MB/s
Nov 4 23:45:38.553210 kernel: raid6: avx2x2 gen() 29229 MB/s
Nov 4 23:45:38.553221 kernel: raid6: avx2x1 gen() 20200 MB/s
Nov 4 23:45:38.553232 kernel: raid6: using algorithm avx2x4 gen() 29277 MB/s
Nov 4 23:45:38.553243 kernel: raid6: .... xor() 6572 MB/s, rmw enabled
Nov 4 23:45:38.553255 kernel: raid6: using avx2x2 recovery algorithm
Nov 4 23:45:38.553267 kernel: xor: automatically using best checksumming function avx
Nov 4 23:45:38.553286 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 23:45:38.553297 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (181)
Nov 4 23:45:38.553309 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc
Nov 4 23:45:38.553320 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:45:38.553331 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 23:45:38.553343 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 23:45:38.553354 kernel: loop: module loaded
Nov 4 23:45:38.553372 kernel: loop0: detected capacity change from 0 to 100120
Nov 4 23:45:38.553383 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 23:45:38.553396 systemd[1]: Successfully made /usr/ read-only.
Nov 4 23:45:38.553465 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:45:38.553478 systemd[1]: Detected virtualization kvm.
Nov 4 23:45:38.553489 systemd[1]: Detected architecture x86-64.
Nov 4 23:45:38.553510 systemd[1]: Running in initrd.
Nov 4 23:45:38.553522 systemd[1]: No hostname configured, using default hostname.
Nov 4 23:45:38.553534 systemd[1]: Hostname set to .
Nov 4 23:45:38.553546 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 23:45:38.553558 systemd[1]: Queued start job for default target initrd.target.
Nov 4 23:45:38.553570 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:45:38.553582 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:45:38.553601 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:45:38.553614 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 4 23:45:38.553626 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:45:38.553639 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 4 23:45:38.553652 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 4 23:45:38.553671 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:45:38.553682 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:45:38.553694 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:45:38.553706 systemd[1]: Reached target paths.target - Path Units.
Nov 4 23:45:38.553718 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:45:38.553730 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:45:38.553741 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 23:45:38.553779 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:45:38.553802 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:45:38.553828 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 4 23:45:38.553851 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 4 23:45:38.553864 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:45:38.553885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:45:38.553896 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:45:38.553934 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 23:45:38.553946 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 4 23:45:38.553958 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 4 23:45:38.553970 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 23:45:38.553982 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 4 23:45:38.553995 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 4 23:45:38.554015 systemd[1]: Starting systemd-fsck-usr.service...
Nov 4 23:45:38.554027 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 23:45:38.554046 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 23:45:38.554058 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:45:38.554070 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 4 23:45:38.554089 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:45:38.554101 systemd[1]: Finished systemd-fsck-usr.service.
Nov 4 23:45:38.554114 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 23:45:38.554166 systemd-journald[316]: Collecting audit messages is disabled.
Nov 4 23:45:38.554222 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:45:38.554239 systemd-journald[316]: Journal started
Nov 4 23:45:38.554263 systemd-journald[316]: Runtime Journal (/run/log/journal/54d072780a06454c9a9fb037551643f9) is 5.9M, max 47.9M, 41.9M free.
Nov 4 23:45:38.559070 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 23:45:38.565168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 23:45:38.573330 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 23:45:38.578422 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 4 23:45:38.583887 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:45:38.589584 kernel: Bridge firewalling registered
Nov 4 23:45:38.588973 systemd-modules-load[319]: Inserted module 'br_netfilter'
Nov 4 23:45:38.591015 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 4 23:45:38.595399 systemd-tmpfiles[334]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 4 23:45:38.599801 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:45:38.614289 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:45:38.618426 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:45:38.624463 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 23:45:38.636051 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:45:38.642369 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 4 23:45:38.647329 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:45:38.650848 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 23:45:38.686053 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:45:38.734835 systemd-resolved[359]: Positive Trust Anchors:
Nov 4 23:45:38.734853 systemd-resolved[359]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 23:45:38.734859 systemd-resolved[359]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 23:45:38.734921 systemd-resolved[359]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 23:45:38.833583 systemd-resolved[359]: Defaulting to hostname 'linux'.
Nov 4 23:45:38.835106 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 23:45:38.836540 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:45:38.908954 kernel: Loading iSCSI transport class v2.0-870.
Nov 4 23:45:38.922950 kernel: iscsi: registered transport (tcp)
Nov 4 23:45:38.968238 kernel: iscsi: registered transport (qla4xxx)
Nov 4 23:45:38.968277 kernel: QLogic iSCSI HBA Driver
Nov 4 23:45:38.997886 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 23:45:39.018029 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:45:39.019119 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 23:45:39.190736 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:45:39.193446 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 4 23:45:39.197788 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 4 23:45:39.246408 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:45:39.250111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:45:39.284057 systemd-udevd[598]: Using default interface naming scheme 'v257'.
Nov 4 23:45:39.300249 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:45:39.302968 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 4 23:45:39.334361 dracut-pre-trigger[656]: rd.md=0: removing MD RAID activation
Nov 4 23:45:39.342825 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:45:39.345509 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 23:45:39.379414 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:45:39.382023 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 23:45:39.405498 systemd-networkd[710]: lo: Link UP
Nov 4 23:45:39.405507 systemd-networkd[710]: lo: Gained carrier
Nov 4 23:45:39.406231 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 23:45:39.406744 systemd[1]: Reached target network.target - Network.
Nov 4 23:45:39.482090 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:45:39.484153 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 4 23:45:39.555451 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 4 23:45:39.569004 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 4 23:45:39.599932 kernel: cryptd: max_cpu_qlen set to 1000
Nov 4 23:45:39.603305 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 23:45:39.620070 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 4 23:45:39.629407 kernel: AES CTR mode by8 optimization enabled
Nov 4 23:45:39.641649 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:45:39.642711 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 23:45:39.645592 systemd-networkd[710]: eth0: Link UP
Nov 4 23:45:39.645830 systemd-networkd[710]: eth0: Gained carrier
Nov 4 23:45:39.645843 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:45:39.676124 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 4 23:45:39.685692 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:45:39.685978 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:45:39.695199 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 4 23:45:39.695199 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:45:39.695226 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 4 23:45:39.699891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:45:39.718290 disk-uuid[839]: Primary Header is updated.
Nov 4 23:45:39.718290 disk-uuid[839]: Secondary Entries is updated.
Nov 4 23:45:39.718290 disk-uuid[839]: Secondary Header is updated.
Nov 4 23:45:39.735770 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:45:39.776117 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:45:39.780991 systemd-resolved[359]: Detected conflict on linux IN A 10.0.0.25
Nov 4 23:45:39.781003 systemd-resolved[359]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Nov 4 23:45:39.783089 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:45:39.787786 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:45:39.797008 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:45:39.802040 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 4 23:45:39.829944 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:45:40.783469 disk-uuid[842]: Warning: The kernel is still using the old partition table.
Nov 4 23:45:40.783469 disk-uuid[842]: The new table will be used at the next reboot or after you
Nov 4 23:45:40.783469 disk-uuid[842]: run partprobe(8) or kpartx(8)
Nov 4 23:45:40.783469 disk-uuid[842]: The operation has completed successfully.
Nov 4 23:45:40.795781 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 4 23:45:40.795951 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 4 23:45:40.801253 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 4 23:45:40.842298 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867)
Nov 4 23:45:40.842362 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:45:40.842379 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:45:40.848669 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:45:40.848714 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:45:40.858960 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:45:40.860297 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 4 23:45:40.866364 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 4 23:45:41.081463 ignition[886]: Ignition 2.22.0
Nov 4 23:45:41.081481 ignition[886]: Stage: fetch-offline
Nov 4 23:45:41.081583 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:45:41.081600 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:45:41.081768 ignition[886]: parsed url from cmdline: ""
Nov 4 23:45:41.081772 ignition[886]: no config URL provided
Nov 4 23:45:41.081779 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 23:45:41.081792 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Nov 4 23:45:41.081838 ignition[886]: op(1): [started] loading QEMU firmware config module
Nov 4 23:45:41.081844 ignition[886]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 4 23:45:41.104383 ignition[886]: op(1): [finished] loading QEMU firmware config module
Nov 4 23:45:41.196487 ignition[886]: parsing config with SHA512: 57cf77730a337986c890cf3073e69cd47bb128dd6b537411c2d8c5efb472cdd880785c96cdc99532b6dbb1c620539f51b322b39e221ea7a3467f5503253fbfc6
Nov 4 23:45:41.203437 unknown[886]: fetched base config from "system"
Nov 4 23:45:41.203454 unknown[886]: fetched user config from "qemu"
Nov 4 23:45:41.204435 ignition[886]: fetch-offline: fetch-offline passed
Nov 4 23:45:41.204614 ignition[886]: Ignition finished successfully
Nov 4 23:45:41.209512 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 23:45:41.214242 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 4 23:45:41.218535 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 4 23:45:41.274967 ignition[896]: Ignition 2.22.0
Nov 4 23:45:41.274982 ignition[896]: Stage: kargs
Nov 4 23:45:41.275187 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:45:41.275199 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:45:41.277150 ignition[896]: kargs: kargs passed
Nov 4 23:45:41.277245 ignition[896]: Ignition finished successfully
Nov 4 23:45:41.283225 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 4 23:45:41.288217 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 4 23:45:41.322023 systemd-networkd[710]: eth0: Gained IPv6LL
Nov 4 23:45:41.354165 ignition[904]: Ignition 2.22.0
Nov 4 23:45:41.354180 ignition[904]: Stage: disks
Nov 4 23:45:41.354431 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:45:41.354465 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:45:41.355392 ignition[904]: disks: disks passed
Nov 4 23:45:41.355450 ignition[904]: Ignition finished successfully
Nov 4 23:45:41.382123 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 4 23:45:41.383241 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 4 23:45:41.383815 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 4 23:45:41.384687 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 23:45:41.393451 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 23:45:41.396279 systemd[1]: Reached target basic.target - Basic System.
Nov 4 23:45:41.401751 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 4 23:45:41.457349 systemd-fsck[914]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 4 23:45:41.524516 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 4 23:45:41.530997 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 4 23:45:41.695931 kernel: EXT4-fs (vda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none.
Nov 4 23:45:41.696332 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 4 23:45:41.699615 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:45:41.704795 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:45:41.708574 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 4 23:45:41.712210 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 4 23:45:41.712266 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 4 23:45:41.712299 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:45:41.726190 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 4 23:45:41.729157 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 4 23:45:41.733894 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (922)
Nov 4 23:45:41.733968 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:45:41.737371 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:45:41.742211 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:45:41.742347 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:45:41.743768 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:45:41.796396 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Nov 4 23:45:41.803095 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
Nov 4 23:45:41.809566 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
Nov 4 23:45:41.815470 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 4 23:45:41.932189 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 4 23:45:41.935731 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 4 23:45:41.938204 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 4 23:45:41.958942 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 4 23:45:41.960956 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:45:41.978134 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 4 23:45:42.026065 ignition[1037]: INFO : Ignition 2.22.0
Nov 4 23:45:42.026065 ignition[1037]: INFO : Stage: mount
Nov 4 23:45:42.028917 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:45:42.028917 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:45:42.028917 ignition[1037]: INFO : mount: mount passed
Nov 4 23:45:42.028917 ignition[1037]: INFO : Ignition finished successfully
Nov 4 23:45:42.038633 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 4 23:45:42.041644 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 4 23:45:42.068359 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:45:42.105952 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1048)
Nov 4 23:45:42.110024 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:45:42.110061 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:45:42.115171 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:45:42.115197 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:45:42.117369 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:45:42.192480 ignition[1065]: INFO : Ignition 2.22.0
Nov 4 23:45:42.192480 ignition[1065]: INFO : Stage: files
Nov 4 23:45:42.195448 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:45:42.195448 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:45:42.195448 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Nov 4 23:45:42.195448 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 4 23:45:42.195448 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 4 23:45:42.206454 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 4 23:45:42.206454 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 4 23:45:42.206454 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 4 23:45:42.206454 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 23:45:42.206454 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 4 23:45:42.199688 unknown[1065]: wrote ssh authorized keys file for user: core
Nov 4 23:45:42.254564 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 4 23:45:42.445647 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 23:45:42.445647 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 4 23:45:42.452141 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 4 23:45:42.452141 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:45:42.452141 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:45:42.452141 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:45:42.465308 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:45:42.465308 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:45:42.465308 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:45:42.465308 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:45:42.465308 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:45:42.465308 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:45:42.465308 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:45:42.465308 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:45:42.465308 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 4 23:45:42.930097 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 4 23:45:43.997317 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:45:43.997317 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 4 23:45:44.004013 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:45:44.009036 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:45:44.009036 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 4 23:45:44.009036 ignition[1065]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 4 23:45:44.016937 ignition[1065]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 23:45:44.016937 ignition[1065]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 23:45:44.016937 ignition[1065]: INFO
: files: op(d): [finished] processing unit "coreos-metadata.service" Nov 4 23:45:44.016937 ignition[1065]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 4 23:45:44.057974 ignition[1065]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 23:45:44.068632 ignition[1065]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 23:45:44.071574 ignition[1065]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 4 23:45:44.071574 ignition[1065]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 4 23:45:44.071574 ignition[1065]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 4 23:45:44.071574 ignition[1065]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 4 23:45:44.071574 ignition[1065]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 4 23:45:44.071574 ignition[1065]: INFO : files: files passed Nov 4 23:45:44.071574 ignition[1065]: INFO : Ignition finished successfully Nov 4 23:45:44.087408 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 4 23:45:44.092378 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 4 23:45:44.095211 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 4 23:45:44.117500 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 4 23:45:44.117664 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Nov 4 23:45:44.127328 initrd-setup-root-after-ignition[1096]: grep: /sysroot/oem/oem-release: No such file or directory Nov 4 23:45:44.135336 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:45:44.138200 initrd-setup-root-after-ignition[1098]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:45:44.140874 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 23:45:44.146741 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 23:45:44.147779 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 4 23:45:44.155086 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 4 23:45:44.361661 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 4 23:45:44.363635 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 4 23:45:44.368352 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 4 23:45:44.372364 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 4 23:45:44.377059 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 4 23:45:44.381387 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 4 23:45:44.416751 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 23:45:44.419286 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 4 23:45:44.453195 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 23:45:44.453590 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:45:44.457838 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 4 23:45:44.458869 systemd[1]: Stopped target timers.target - Timer Units. Nov 4 23:45:44.464465 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 4 23:45:44.464640 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 23:45:44.470786 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 4 23:45:44.471734 systemd[1]: Stopped target basic.target - Basic System. Nov 4 23:45:44.477133 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 4 23:45:44.480492 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 23:45:44.484387 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 4 23:45:44.488419 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 4 23:45:44.492398 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 4 23:45:44.496004 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 23:45:44.503175 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 4 23:45:44.504013 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 4 23:45:44.507560 systemd[1]: Stopped target swap.target - Swaps. Nov 4 23:45:44.511473 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 4 23:45:44.511762 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 4 23:45:44.515022 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:45:44.521680 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:45:44.525638 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 4 23:45:44.525779 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:45:44.526700 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Nov 4 23:45:44.526901 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 4 23:45:44.536081 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 4 23:45:44.538125 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 23:45:44.542018 systemd[1]: Stopped target paths.target - Path Units. Nov 4 23:45:44.545370 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 4 23:45:44.548975 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:45:44.549715 systemd[1]: Stopped target slices.target - Slice Units. Nov 4 23:45:44.554791 systemd[1]: Stopped target sockets.target - Socket Units. Nov 4 23:45:44.555805 systemd[1]: iscsid.socket: Deactivated successfully. Nov 4 23:45:44.555931 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 23:45:44.563772 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 4 23:45:44.563869 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 23:45:44.565010 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 4 23:45:44.565145 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 23:45:44.565980 systemd[1]: ignition-files.service: Deactivated successfully. Nov 4 23:45:44.566110 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 4 23:45:44.578743 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 4 23:45:44.579573 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 4 23:45:44.579723 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:45:44.581690 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 4 23:45:44.589534 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Nov 4 23:45:44.589706 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:45:44.593022 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 4 23:45:44.593161 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:45:44.598960 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 4 23:45:44.599113 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 23:45:44.620079 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 4 23:45:44.621845 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 4 23:45:44.631512 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 4 23:45:44.649900 ignition[1122]: INFO : Ignition 2.22.0 Nov 4 23:45:44.649900 ignition[1122]: INFO : Stage: umount Nov 4 23:45:44.652850 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:45:44.652850 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 23:45:44.652850 ignition[1122]: INFO : umount: umount passed Nov 4 23:45:44.652850 ignition[1122]: INFO : Ignition finished successfully Nov 4 23:45:44.660734 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 4 23:45:44.662572 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 4 23:45:44.667092 systemd[1]: Stopped target network.target - Network. Nov 4 23:45:44.670616 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 4 23:45:44.670756 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 4 23:45:44.673794 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 4 23:45:44.673865 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 4 23:45:44.674832 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 4 23:45:44.674890 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Nov 4 23:45:44.675489 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 4 23:45:44.675559 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 4 23:45:44.682449 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 4 23:45:44.685331 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 4 23:45:44.699472 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 4 23:45:44.699693 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 4 23:45:44.708999 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 4 23:45:44.709416 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 4 23:45:44.717058 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 4 23:45:44.719005 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 4 23:45:44.719054 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:45:44.724137 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 4 23:45:44.724757 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 4 23:45:44.724820 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 23:45:44.727895 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 23:45:44.727964 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:45:44.728698 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 4 23:45:44.728744 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 4 23:45:44.736317 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:45:44.737207 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 4 23:45:44.740927 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Nov 4 23:45:44.743016 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 4 23:45:44.743198 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 4 23:45:44.767714 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 4 23:45:44.767926 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:45:44.768954 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 4 23:45:44.769005 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 4 23:45:44.773923 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 4 23:45:44.773975 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:45:44.777385 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 4 23:45:44.777457 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 4 23:45:44.782943 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 4 23:45:44.783008 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 4 23:45:44.784331 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 4 23:45:44.784397 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 23:45:44.794045 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 4 23:45:44.794875 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 4 23:45:44.794979 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:45:44.801287 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 4 23:45:44.801419 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:45:44.803679 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 4 23:45:44.803757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:45:44.830993 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 4 23:45:44.846434 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 4 23:45:44.852525 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 4 23:45:44.852718 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 4 23:45:44.854008 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 4 23:45:44.859030 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 4 23:45:44.896902 systemd[1]: Switching root. Nov 4 23:45:44.942495 systemd-journald[316]: Journal stopped Nov 4 23:45:47.010781 systemd-journald[316]: Received SIGTERM from PID 1 (systemd). Nov 4 23:45:47.010889 kernel: SELinux: policy capability network_peer_controls=1 Nov 4 23:45:47.011014 kernel: SELinux: policy capability open_perms=1 Nov 4 23:45:47.011061 kernel: SELinux: policy capability extended_socket_class=1 Nov 4 23:45:47.011090 kernel: SELinux: policy capability always_check_network=0 Nov 4 23:45:47.011118 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 4 23:45:47.011139 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 4 23:45:47.011186 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 4 23:45:47.011216 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 4 23:45:47.011245 kernel: SELinux: policy capability userspace_initial_context=0 Nov 4 23:45:47.011273 kernel: audit: type=1403 audit(1762299945.858:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 4 23:45:47.011313 systemd[1]: Successfully loaded SELinux policy in 69.420ms. Nov 4 23:45:47.011364 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.219ms. 
Nov 4 23:45:47.011404 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 23:45:47.011462 systemd[1]: Detected virtualization kvm. Nov 4 23:45:47.011492 systemd[1]: Detected architecture x86-64. Nov 4 23:45:47.011543 systemd[1]: Detected first boot. Nov 4 23:45:47.011573 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 23:45:47.011603 zram_generator::config[1168]: No configuration found. Nov 4 23:45:47.011633 kernel: Guest personality initialized and is inactive Nov 4 23:45:47.011664 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 4 23:45:47.011709 kernel: Initialized host personality Nov 4 23:45:47.011738 kernel: NET: Registered PF_VSOCK protocol family Nov 4 23:45:47.011772 systemd[1]: Populated /etc with preset unit settings. Nov 4 23:45:47.011807 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 4 23:45:47.011836 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 4 23:45:47.011868 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 4 23:45:47.011899 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 4 23:45:47.011966 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 4 23:45:47.012000 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 4 23:45:47.012038 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 4 23:45:47.012077 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 4 23:45:47.012109 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Nov 4 23:45:47.012128 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 4 23:45:47.012161 systemd[1]: Created slice user.slice - User and Session Slice. Nov 4 23:45:47.012181 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:45:47.012200 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:45:47.012218 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 4 23:45:47.012235 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 4 23:45:47.012254 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 4 23:45:47.012272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 23:45:47.012302 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 4 23:45:47.012320 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:45:47.012342 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:45:47.012359 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 4 23:45:47.012377 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 4 23:45:47.012395 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 4 23:45:47.012424 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 4 23:45:47.012452 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:45:47.012474 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 23:45:47.012493 systemd[1]: Reached target slices.target - Slice Units. Nov 4 23:45:47.012510 systemd[1]: Reached target swap.target - Swaps. 
Nov 4 23:45:47.012528 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 4 23:45:47.012546 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 4 23:45:47.012564 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 4 23:45:47.012601 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:45:47.012624 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 23:45:47.012656 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:45:47.012678 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 4 23:45:47.012700 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 23:45:47.012719 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 23:45:47.012736 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 23:45:47.012765 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:45:47.012783 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 4 23:45:47.012801 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 23:45:47.012819 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 23:45:47.012837 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 4 23:45:47.012854 systemd[1]: Reached target machines.target - Containers. Nov 4 23:45:47.012872 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 4 23:45:47.012901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Nov 4 23:45:47.012943 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 23:45:47.012961 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 4 23:45:47.012978 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:45:47.012996 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:45:47.013014 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:45:47.013043 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 23:45:47.013062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:45:47.013080 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 23:45:47.013098 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 23:45:47.013125 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 23:45:47.013142 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 4 23:45:47.013159 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 23:45:47.013189 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:45:47.013208 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 23:45:47.013226 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 23:45:47.013244 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 23:45:47.013261 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Nov 4 23:45:47.013279 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 4 23:45:47.013297 kernel: fuse: init (API version 7.41) Nov 4 23:45:47.013326 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 23:45:47.013345 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:45:47.013364 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 4 23:45:47.013381 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 23:45:47.013409 systemd[1]: Mounted media.mount - External Media Directory. Nov 4 23:45:47.013428 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 23:45:47.013454 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 23:45:47.013472 kernel: ACPI: bus type drm_connector registered Nov 4 23:45:47.013516 systemd-journald[1232]: Collecting audit messages is disabled. Nov 4 23:45:47.013549 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 23:45:47.013580 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:45:47.013608 systemd-journald[1232]: Journal started Nov 4 23:45:47.013639 systemd-journald[1232]: Runtime Journal (/run/log/journal/54d072780a06454c9a9fb037551643f9) is 5.9M, max 47.9M, 41.9M free. Nov 4 23:45:46.641007 systemd[1]: Queued start job for default target multi-user.target. Nov 4 23:45:46.662553 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 4 23:45:46.663190 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 23:45:47.019943 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 23:45:47.022375 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Nov 4 23:45:47.022869 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 4 23:45:47.025515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:45:47.025835 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:45:47.028245 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:45:47.028578 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:45:47.030867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:45:47.031101 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:45:47.033831 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 23:45:47.036316 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 23:45:47.036541 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 23:45:47.038813 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:45:47.039043 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:45:47.041397 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 23:45:47.044026 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:45:47.047469 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 23:45:47.050074 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 23:45:47.065398 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 23:45:47.067872 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 23:45:47.071526 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 23:45:47.074602 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Nov 4 23:45:47.076594 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 4 23:45:47.077134 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 23:45:47.079494 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 4 23:45:47.081854 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:45:47.088032 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 23:45:47.091935 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 4 23:45:47.093835 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:45:47.094967 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 4 23:45:47.097056 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:45:47.099120 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:45:47.103094 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 4 23:45:47.109860 systemd-journald[1232]: Time spent on flushing to /var/log/journal/54d072780a06454c9a9fb037551643f9 is 15.335ms for 1022 entries. Nov 4 23:45:47.109860 systemd-journald[1232]: System Journal (/var/log/journal/54d072780a06454c9a9fb037551643f9) is 8M, max 163.5M, 155.5M free. Nov 4 23:45:47.145573 systemd-journald[1232]: Received client request to flush runtime journal. Nov 4 23:45:47.145611 kernel: loop1: detected capacity change from 0 to 110984 Nov 4 23:45:47.111148 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Nov 4 23:45:47.115713 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 4 23:45:47.118278 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 4 23:45:47.122517 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 23:45:47.126094 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:45:47.130360 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 23:45:47.136162 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 23:45:47.142264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:45:47.151218 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 23:45:47.176956 kernel: loop2: detected capacity change from 0 to 128048 Nov 4 23:45:47.198726 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 23:45:47.205232 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 23:45:47.210985 kernel: loop3: detected capacity change from 0 to 229808 Nov 4 23:45:47.211314 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 23:45:47.214552 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 23:45:47.225084 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 23:45:47.237042 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Nov 4 23:45:47.237390 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Nov 4 23:45:47.245007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 4 23:45:47.251939 kernel: loop4: detected capacity change from 0 to 110984 Nov 4 23:45:47.261928 kernel: loop5: detected capacity change from 0 to 128048 Nov 4 23:45:47.270945 kernel: loop6: detected capacity change from 0 to 229808 Nov 4 23:45:47.277048 (sd-merge)[1310]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 4 23:45:47.280862 (sd-merge)[1310]: Merged extensions into '/usr'. Nov 4 23:45:47.282516 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 4 23:45:47.290256 systemd[1]: Reload requested from client PID 1287 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 23:45:47.290297 systemd[1]: Reloading... Nov 4 23:45:47.370937 zram_generator::config[1342]: No configuration found. Nov 4 23:45:47.394030 systemd-resolved[1303]: Positive Trust Anchors: Nov 4 23:45:47.394049 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:45:47.394055 systemd-resolved[1303]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:45:47.394097 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:45:47.401679 systemd-resolved[1303]: Defaulting to hostname 'linux'. Nov 4 23:45:47.637064 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 23:45:47.637327 systemd[1]: Reloading finished in 346 ms. Nov 4 23:45:47.670373 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Nov 4 23:45:47.672866 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 23:45:47.679877 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:45:47.727093 systemd[1]: Starting ensure-sysext.service... Nov 4 23:45:47.729941 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 23:45:47.753845 systemd[1]: Reload requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Nov 4 23:45:47.753866 systemd[1]: Reloading... Nov 4 23:45:47.764283 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 23:45:47.764330 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 4 23:45:47.764691 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 23:45:47.765105 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 23:45:47.766685 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 23:45:47.767206 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Nov 4 23:45:47.767331 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Nov 4 23:45:47.773890 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 23:45:47.773899 systemd-tmpfiles[1380]: Skipping /boot Nov 4 23:45:47.786120 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 23:45:47.786134 systemd-tmpfiles[1380]: Skipping /boot Nov 4 23:45:47.832948 zram_generator::config[1413]: No configuration found. Nov 4 23:45:48.032260 systemd[1]: Reloading finished in 277 ms. 
Nov 4 23:45:48.053950 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 23:45:48.109619 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:45:48.123121 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:45:48.126428 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 4 23:45:48.143538 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 23:45:48.147425 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 23:45:48.154243 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:45:48.158210 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 23:45:48.164920 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:45:48.165119 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:45:48.167262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:45:48.173253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:45:48.178264 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:45:48.180259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:45:48.180467 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Nov 4 23:45:48.180604 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:45:48.189257 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:45:48.190255 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:45:48.190601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:45:48.190825 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:45:48.191108 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:45:48.196250 systemd-udevd[1454]: Using default interface naming scheme 'v257'. Nov 4 23:45:48.202813 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 23:45:48.234713 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:45:48.237454 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:45:48.240146 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:45:48.243276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:45:48.249613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:45:48.249869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:45:48.261431 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 4 23:45:48.270677 systemd[1]: Finished ensure-sysext.service. 
Nov 4 23:45:48.277384 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:45:48.277630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:45:48.279477 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:45:48.282374 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:45:48.282440 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:45:48.282503 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:45:48.282577 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:45:48.284943 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 4 23:45:48.285682 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:45:48.298741 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:45:48.299082 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:45:48.321366 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:45:48.327248 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 23:45:48.366444 augenrules[1509]: No rules Nov 4 23:45:48.370781 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:45:48.371958 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Nov 4 23:45:48.375278 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 4 23:45:48.378255 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 4 23:45:48.381827 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 23:45:48.436976 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 23:45:48.475971 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 4 23:45:48.521065 systemd-networkd[1490]: lo: Link UP Nov 4 23:45:48.521081 systemd-networkd[1490]: lo: Gained carrier Nov 4 23:45:48.523212 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 23:45:48.523239 systemd-networkd[1490]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:45:48.523245 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 23:45:48.525139 systemd-networkd[1490]: eth0: Link UP Nov 4 23:45:48.525372 systemd-networkd[1490]: eth0: Gained carrier Nov 4 23:45:48.525403 systemd-networkd[1490]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:45:48.526732 systemd[1]: Reached target network.target - Network. Nov 4 23:45:48.541943 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 23:45:48.545833 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Nov 4 23:45:48.552989 systemd-networkd[1490]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 4 23:45:48.555578 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection. Nov 4 23:45:48.563553 systemd-timesyncd[1481]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 4 23:45:48.563640 systemd-timesyncd[1481]: Initial clock synchronization to Tue 2025-11-04 23:45:48.899492 UTC. Nov 4 23:45:48.593183 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 23:45:48.605836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 4 23:45:48.606952 kernel: mousedev: PS/2 mouse device common for all mice Nov 4 23:45:48.611130 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 23:45:48.618048 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 4 23:45:48.618575 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 4 23:45:48.618595 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 4 23:45:48.621722 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 4 23:45:48.641971 kernel: ACPI: button: Power Button [PWRF] Nov 4 23:45:48.645329 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 23:45:48.769120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:45:48.787800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:45:48.788162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:45:48.801186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 4 23:45:48.982008 kernel: kvm_amd: TSC scaling supported Nov 4 23:45:48.982104 kernel: kvm_amd: Nested Virtualization enabled Nov 4 23:45:48.982129 kernel: kvm_amd: Nested Paging enabled Nov 4 23:45:48.984076 kernel: kvm_amd: LBR virtualization supported Nov 4 23:45:48.984099 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 4 23:45:48.985717 kernel: kvm_amd: Virtual GIF supported Nov 4 23:45:49.126617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:45:49.155034 kernel: EDAC MC: Ver: 3.0.0 Nov 4 23:45:49.239730 ldconfig[1451]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 23:45:49.248896 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 23:45:49.253274 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 23:45:49.297408 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 23:45:49.299642 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 23:45:49.301691 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 23:45:49.303872 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 23:45:49.306218 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 4 23:45:49.308455 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 23:45:49.310589 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 23:45:49.312819 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 23:45:49.315023 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 23:45:49.315062 systemd[1]: Reached target paths.target - Path Units. 
Nov 4 23:45:49.316624 systemd[1]: Reached target timers.target - Timer Units. Nov 4 23:45:49.319443 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 23:45:49.323338 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 23:45:49.328132 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 23:45:49.330677 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 23:45:49.333147 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 23:45:49.339500 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 23:45:49.342148 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 23:45:49.345128 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 23:45:49.348210 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 23:45:49.350122 systemd[1]: Reached target basic.target - Basic System. Nov 4 23:45:49.351889 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:45:49.351930 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:45:49.353536 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 23:45:49.356752 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 23:45:49.359805 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 23:45:49.362859 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 23:45:49.370197 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Nov 4 23:45:49.372494 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 23:45:49.374479 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 4 23:45:49.375742 jq[1568]: false Nov 4 23:45:49.377884 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 23:45:49.381141 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 23:45:49.388147 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 23:45:49.393293 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 23:45:49.394694 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing passwd entry cache Nov 4 23:45:49.394701 oslogin_cache_refresh[1570]: Refreshing passwd entry cache Nov 4 23:45:49.395381 extend-filesystems[1569]: Found /dev/vda6 Nov 4 23:45:49.405736 extend-filesystems[1569]: Found /dev/vda9 Nov 4 23:45:49.404356 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 23:45:49.409190 oslogin_cache_refresh[1570]: Failure getting users, quitting Nov 4 23:45:49.410998 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting users, quitting Nov 4 23:45:49.410998 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 23:45:49.410998 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing group entry cache Nov 4 23:45:49.406216 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 23:45:49.409222 oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Nov 4 23:45:49.406978 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 23:45:49.409310 oslogin_cache_refresh[1570]: Refreshing group entry cache Nov 4 23:45:49.413034 extend-filesystems[1569]: Checking size of /dev/vda9 Nov 4 23:45:49.408681 systemd[1]: Starting update-engine.service - Update Engine... Nov 4 23:45:49.415109 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 23:45:49.419582 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting groups, quitting Nov 4 23:45:49.419582 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:45:49.417599 oslogin_cache_refresh[1570]: Failure getting groups, quitting Nov 4 23:45:49.417622 oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:45:49.426336 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 23:45:49.429457 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 23:45:49.431196 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 23:45:49.431616 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 4 23:45:49.431913 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Nov 4 23:45:49.434597 extend-filesystems[1569]: Resized partition /dev/vda9 Nov 4 23:45:49.444242 extend-filesystems[1597]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 23:45:49.465952 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 4 23:45:49.466131 jq[1584]: true Nov 4 23:45:49.466638 update_engine[1581]: I20251104 23:45:49.456688 1581 main.cc:92] Flatcar Update Engine starting Nov 4 23:45:49.455098 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 23:45:49.455760 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 4 23:45:49.459769 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 23:45:49.462323 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 23:45:49.497008 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 4 23:45:49.504359 jq[1599]: true Nov 4 23:45:49.521842 (ntainerd)[1601]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 4 23:45:49.529274 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 23:45:49.546288 extend-filesystems[1597]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 4 23:45:49.546288 extend-filesystems[1597]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 4 23:45:49.546288 extend-filesystems[1597]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 4 23:45:49.560251 tar[1596]: linux-amd64/LICENSE Nov 4 23:45:49.560251 tar[1596]: linux-amd64/helm Nov 4 23:45:49.529579 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 4 23:45:49.561046 extend-filesystems[1569]: Resized filesystem in /dev/vda9 Nov 4 23:45:49.571056 dbus-daemon[1566]: [system] SELinux support is enabled Nov 4 23:45:49.571889 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 4 23:45:49.578227 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 23:45:49.578260 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 23:45:49.580500 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 23:45:49.580519 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 23:45:49.586704 systemd[1]: Started update-engine.service - Update Engine. Nov 4 23:45:49.588914 update_engine[1581]: I20251104 23:45:49.588630 1581 update_check_scheduler.cc:74] Next update check in 8m37s Nov 4 23:45:49.589031 bash[1636]: Updated "/home/core/.ssh/authorized_keys" Nov 4 23:45:49.590542 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 23:45:49.599291 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 4 23:45:49.604217 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 23:45:49.619130 systemd-logind[1579]: Watching system buttons on /dev/input/event2 (Power Button) Nov 4 23:45:49.619188 systemd-logind[1579]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 4 23:45:49.619684 systemd-logind[1579]: New seat seat0. Nov 4 23:45:49.620846 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 23:45:49.686130 sshd_keygen[1590]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 23:45:49.706739 locksmithd[1638]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 23:45:49.736174 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Nov 4 23:45:49.743135 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 23:45:49.788815 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 23:45:49.789526 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 23:45:49.797016 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 23:45:49.836272 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 23:45:49.842442 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 23:45:49.847342 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 23:45:49.859549 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 23:45:50.146613 containerd[1601]: time="2025-11-04T23:45:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 23:45:50.147639 containerd[1601]: time="2025-11-04T23:45:50.147575336Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 4 23:45:50.161348 containerd[1601]: time="2025-11-04T23:45:50.161233024Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.781µs" Nov 4 23:45:50.161348 containerd[1601]: time="2025-11-04T23:45:50.161298546Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 23:45:50.161348 containerd[1601]: time="2025-11-04T23:45:50.161336417Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 23:45:50.161709 containerd[1601]: time="2025-11-04T23:45:50.161671794Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 23:45:50.161709 containerd[1601]: time="2025-11-04T23:45:50.161701418Z" level=info msg="loading plugin" 
id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 23:45:50.161762 containerd[1601]: time="2025-11-04T23:45:50.161739061Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:45:50.161853 containerd[1601]: time="2025-11-04T23:45:50.161820214Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:45:50.161853 containerd[1601]: time="2025-11-04T23:45:50.161838226Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:45:50.162239 containerd[1601]: time="2025-11-04T23:45:50.162206716Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:45:50.162239 containerd[1601]: time="2025-11-04T23:45:50.162228664Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:45:50.162285 containerd[1601]: time="2025-11-04T23:45:50.162240484Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:45:50.162285 containerd[1601]: time="2025-11-04T23:45:50.162249168Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 23:45:50.162421 containerd[1601]: time="2025-11-04T23:45:50.162389799Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 23:45:50.162789 containerd[1601]: time="2025-11-04T23:45:50.162755432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 
23:45:50.162814 containerd[1601]: time="2025-11-04T23:45:50.162798965Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 23:45:50.162814 containerd[1601]: time="2025-11-04T23:45:50.162810400Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 23:45:50.162896 containerd[1601]: time="2025-11-04T23:45:50.162866969Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 23:45:50.163214 containerd[1601]: time="2025-11-04T23:45:50.163189039Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 23:45:50.163292 containerd[1601]: time="2025-11-04T23:45:50.163272135Z" level=info msg="metadata content store policy set" policy=shared Nov 4 23:45:50.195306 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 23:45:50.199871 systemd[1]: Started sshd@0-10.0.0.25:22-10.0.0.1:55264.service - OpenSSH per-connection server daemon (10.0.0.1:55264). 
Nov 4 23:45:50.225344 tar[1596]: linux-amd64/README.md Nov 4 23:45:50.254851 containerd[1601]: time="2025-11-04T23:45:50.254759563Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 23:45:50.255032 containerd[1601]: time="2025-11-04T23:45:50.254865220Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 23:45:50.255032 containerd[1601]: time="2025-11-04T23:45:50.254892081Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 23:45:50.255032 containerd[1601]: time="2025-11-04T23:45:50.254905013Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 23:45:50.255032 containerd[1601]: time="2025-11-04T23:45:50.254917301Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 23:45:50.255032 containerd[1601]: time="2025-11-04T23:45:50.254931084Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 23:45:50.255032 containerd[1601]: time="2025-11-04T23:45:50.254969911Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 23:45:50.255032 containerd[1601]: time="2025-11-04T23:45:50.254985191Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 23:45:50.255032 containerd[1601]: time="2025-11-04T23:45:50.254995640Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 23:45:50.255032 containerd[1601]: time="2025-11-04T23:45:50.255008520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 23:45:50.255032 containerd[1601]: time="2025-11-04T23:45:50.255018833Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 23:45:50.255032 containerd[1601]: time="2025-11-04T23:45:50.255033314Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 23:45:50.255364 containerd[1601]: time="2025-11-04T23:45:50.255295845Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 23:45:50.255364 containerd[1601]: time="2025-11-04T23:45:50.255323725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 23:45:50.255364 containerd[1601]: time="2025-11-04T23:45:50.255339450Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 23:45:50.255432 containerd[1601]: time="2025-11-04T23:45:50.255376117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 4 23:45:50.255432 containerd[1601]: time="2025-11-04T23:45:50.255415244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 23:45:50.255469 containerd[1601]: time="2025-11-04T23:45:50.255433786Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 4 23:45:50.255469 containerd[1601]: time="2025-11-04T23:45:50.255446810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 4 23:45:50.255469 containerd[1601]: time="2025-11-04T23:45:50.255459421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 4 23:45:50.255538 containerd[1601]: time="2025-11-04T23:45:50.255472987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 4 23:45:50.255538 containerd[1601]: time="2025-11-04T23:45:50.255484349Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 4 23:45:50.255538 
containerd[1601]: time="2025-11-04T23:45:50.255510110Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 4 23:45:50.255678 containerd[1601]: time="2025-11-04T23:45:50.255645505Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 4 23:45:50.255705 containerd[1601]: time="2025-11-04T23:45:50.255678650Z" level=info msg="Start snapshots syncer" Nov 4 23:45:50.255749 containerd[1601]: time="2025-11-04T23:45:50.255719356Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 4 23:45:50.257423 containerd[1601]: time="2025-11-04T23:45:50.257205555Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMou
ntsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 23:45:50.257886 containerd[1601]: time="2025-11-04T23:45:50.257454490Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 23:45:50.257886 containerd[1601]: time="2025-11-04T23:45:50.257602059Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 23:45:50.257886 containerd[1601]: time="2025-11-04T23:45:50.257833056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 23:45:50.257886 containerd[1601]: time="2025-11-04T23:45:50.257861060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 23:45:50.257886 containerd[1601]: time="2025-11-04T23:45:50.257880193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 23:45:50.258109 containerd[1601]: time="2025-11-04T23:45:50.257896833Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 23:45:50.258109 containerd[1601]: time="2025-11-04T23:45:50.257912694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 23:45:50.258109 containerd[1601]: time="2025-11-04T23:45:50.257925803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 23:45:50.258109 containerd[1601]: time="2025-11-04T23:45:50.257956309Z" level=info 
msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 23:45:50.258109 containerd[1601]: time="2025-11-04T23:45:50.258000340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 23:45:50.258109 containerd[1601]: time="2025-11-04T23:45:50.258017208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 23:45:50.258109 containerd[1601]: time="2025-11-04T23:45:50.258056336Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 23:45:50.258109 containerd[1601]: time="2025-11-04T23:45:50.258111482Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:45:50.258377 containerd[1601]: time="2025-11-04T23:45:50.258136524Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:45:50.258377 containerd[1601]: time="2025-11-04T23:45:50.258180939Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:45:50.258377 containerd[1601]: time="2025-11-04T23:45:50.258208486Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:45:50.258377 containerd[1601]: time="2025-11-04T23:45:50.258218188Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 23:45:50.258377 containerd[1601]: time="2025-11-04T23:45:50.258250315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 23:45:50.258377 containerd[1601]: time="2025-11-04T23:45:50.258290481Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 
23:45:50.258377 containerd[1601]: time="2025-11-04T23:45:50.258321891Z" level=info msg="runtime interface created" Nov 4 23:45:50.258377 containerd[1601]: time="2025-11-04T23:45:50.258334044Z" level=info msg="created NRI interface" Nov 4 23:45:50.258377 containerd[1601]: time="2025-11-04T23:45:50.258345096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 23:45:50.258377 containerd[1601]: time="2025-11-04T23:45:50.258363782Z" level=info msg="Connect containerd service" Nov 4 23:45:50.258572 containerd[1601]: time="2025-11-04T23:45:50.258389926Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 23:45:50.259402 containerd[1601]: time="2025-11-04T23:45:50.259367150Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:45:50.347193 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 23:45:50.398581 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 55264 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:45:50.401358 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:45:50.410481 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 23:45:50.414612 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 23:45:50.429153 systemd-logind[1579]: New session 1 of user core. Nov 4 23:45:50.468701 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 23:45:50.474462 systemd-networkd[1490]: eth0: Gained IPv6LL Nov 4 23:45:50.477556 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 4 23:45:50.480697 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 23:45:50.484958 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 23:45:50.489174 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 4 23:45:50.494465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:45:50.507212 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 23:45:50.522791 (systemd)[1686]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 23:45:50.526734 systemd-logind[1579]: New session c1 of user core. Nov 4 23:45:50.548708 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 4 23:45:50.549123 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 4 23:45:50.551619 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 23:45:50.555563 containerd[1601]: time="2025-11-04T23:45:50.555489815Z" level=info msg="Start subscribing containerd event" Nov 4 23:45:50.555675 containerd[1601]: time="2025-11-04T23:45:50.555607832Z" level=info msg="Start recovering state" Nov 4 23:45:50.555833 containerd[1601]: time="2025-11-04T23:45:50.555813111Z" level=info msg="Start event monitor" Nov 4 23:45:50.555865 containerd[1601]: time="2025-11-04T23:45:50.555851554Z" level=info msg="Start cni network conf syncer for default" Nov 4 23:45:50.555885 containerd[1601]: time="2025-11-04T23:45:50.555877781Z" level=info msg="Start streaming server" Nov 4 23:45:50.555905 containerd[1601]: time="2025-11-04T23:45:50.555895501Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 23:45:50.555971 containerd[1601]: time="2025-11-04T23:45:50.555907229Z" level=info msg="runtime interface starting up..." Nov 4 23:45:50.555971 containerd[1601]: time="2025-11-04T23:45:50.555915735Z" level=info msg="starting plugins..." 
Nov 4 23:45:50.556314 containerd[1601]: time="2025-11-04T23:45:50.556251268Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 23:45:50.556381 containerd[1601]: time="2025-11-04T23:45:50.556353435Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 23:45:50.557921 containerd[1601]: time="2025-11-04T23:45:50.557893531Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 23:45:50.558279 containerd[1601]: time="2025-11-04T23:45:50.558253650Z" level=info msg="containerd successfully booted in 0.412836s" Nov 4 23:45:50.558389 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 23:45:50.562589 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 23:45:50.733039 systemd[1686]: Queued start job for default target default.target. Nov 4 23:45:50.756902 systemd[1686]: Created slice app.slice - User Application Slice. Nov 4 23:45:50.756938 systemd[1686]: Reached target paths.target - Paths. Nov 4 23:45:50.757009 systemd[1686]: Reached target timers.target - Timers. Nov 4 23:45:50.759066 systemd[1686]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 23:45:50.772343 systemd[1686]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 23:45:50.772504 systemd[1686]: Reached target sockets.target - Sockets. Nov 4 23:45:50.772551 systemd[1686]: Reached target basic.target - Basic System. Nov 4 23:45:50.772594 systemd[1686]: Reached target default.target - Main User Target. Nov 4 23:45:50.772629 systemd[1686]: Startup finished in 222ms. Nov 4 23:45:50.773655 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 23:45:50.777794 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 23:45:50.912168 systemd[1]: Started sshd@1-10.0.0.25:22-10.0.0.1:55278.service - OpenSSH per-connection server daemon (10.0.0.1:55278). 
Nov 4 23:45:51.021305 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 55278 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:45:51.023189 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:45:51.028316 systemd-logind[1579]: New session 2 of user core. Nov 4 23:45:51.039125 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 4 23:45:51.098051 sshd[1724]: Connection closed by 10.0.0.1 port 55278 Nov 4 23:45:51.098452 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Nov 4 23:45:51.109531 systemd[1]: sshd@1-10.0.0.25:22-10.0.0.1:55278.service: Deactivated successfully. Nov 4 23:45:51.111717 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 23:45:51.112626 systemd-logind[1579]: Session 2 logged out. Waiting for processes to exit. Nov 4 23:45:51.116328 systemd[1]: Started sshd@2-10.0.0.25:22-10.0.0.1:55284.service - OpenSSH per-connection server daemon (10.0.0.1:55284). Nov 4 23:45:51.119410 systemd-logind[1579]: Removed session 2. Nov 4 23:45:51.376714 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 55284 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:45:51.378641 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:45:51.384972 systemd-logind[1579]: New session 3 of user core. Nov 4 23:45:51.405252 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 23:45:51.466017 sshd[1733]: Connection closed by 10.0.0.1 port 55284 Nov 4 23:45:51.466384 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Nov 4 23:45:51.471336 systemd[1]: sshd@2-10.0.0.25:22-10.0.0.1:55284.service: Deactivated successfully. Nov 4 23:45:51.474570 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 23:45:51.475588 systemd-logind[1579]: Session 3 logged out. Waiting for processes to exit. 
Nov 4 23:45:51.477759 systemd-logind[1579]: Removed session 3. Nov 4 23:45:52.764544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:45:52.767521 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 23:45:52.769926 systemd[1]: Startup finished in 2.993s (kernel) + 7.766s (initrd) + 6.976s (userspace) = 17.737s. Nov 4 23:45:52.778351 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:45:54.063126 kubelet[1743]: E1104 23:45:54.062781 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:45:54.067770 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:45:54.068069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:45:54.068558 systemd[1]: kubelet.service: Consumed 3.189s CPU time, 268.1M memory peak. Nov 4 23:46:01.659716 systemd[1]: Started sshd@3-10.0.0.25:22-10.0.0.1:45494.service - OpenSSH per-connection server daemon (10.0.0.1:45494). Nov 4 23:46:01.758127 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 45494 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:46:01.759867 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:46:01.765328 systemd-logind[1579]: New session 4 of user core. Nov 4 23:46:01.789157 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 4 23:46:01.845966 sshd[1759]: Connection closed by 10.0.0.1 port 45494 Nov 4 23:46:01.846355 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Nov 4 23:46:01.862518 systemd[1]: sshd@3-10.0.0.25:22-10.0.0.1:45494.service: Deactivated successfully. Nov 4 23:46:01.864845 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 23:46:01.865837 systemd-logind[1579]: Session 4 logged out. Waiting for processes to exit. Nov 4 23:46:01.869347 systemd[1]: Started sshd@4-10.0.0.25:22-10.0.0.1:45508.service - OpenSSH per-connection server daemon (10.0.0.1:45508). Nov 4 23:46:01.869866 systemd-logind[1579]: Removed session 4. Nov 4 23:46:01.933391 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 45508 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:46:01.934964 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:46:01.939728 systemd-logind[1579]: New session 5 of user core. Nov 4 23:46:01.950071 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 4 23:46:02.001948 sshd[1768]: Connection closed by 10.0.0.1 port 45508 Nov 4 23:46:02.002286 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Nov 4 23:46:02.011731 systemd[1]: sshd@4-10.0.0.25:22-10.0.0.1:45508.service: Deactivated successfully. Nov 4 23:46:02.013733 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 23:46:02.014724 systemd-logind[1579]: Session 5 logged out. Waiting for processes to exit. Nov 4 23:46:02.017566 systemd[1]: Started sshd@5-10.0.0.25:22-10.0.0.1:45548.service - OpenSSH per-connection server daemon (10.0.0.1:45548). Nov 4 23:46:02.018266 systemd-logind[1579]: Removed session 5. 
Nov 4 23:46:02.075228 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 45548 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:46:02.077164 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:46:02.082503 systemd-logind[1579]: New session 6 of user core. Nov 4 23:46:02.098310 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 23:46:02.155113 sshd[1777]: Connection closed by 10.0.0.1 port 45548 Nov 4 23:46:02.155648 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Nov 4 23:46:02.168685 systemd[1]: sshd@5-10.0.0.25:22-10.0.0.1:45548.service: Deactivated successfully. Nov 4 23:46:02.170455 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 23:46:02.171240 systemd-logind[1579]: Session 6 logged out. Waiting for processes to exit. Nov 4 23:46:02.174118 systemd[1]: Started sshd@6-10.0.0.25:22-10.0.0.1:45562.service - OpenSSH per-connection server daemon (10.0.0.1:45562). Nov 4 23:46:02.174683 systemd-logind[1579]: Removed session 6. Nov 4 23:46:02.235796 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 45562 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:46:02.237395 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:46:02.241849 systemd-logind[1579]: New session 7 of user core. Nov 4 23:46:02.263086 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 4 23:46:02.328255 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 23:46:02.328573 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:46:02.349433 sudo[1787]: pam_unix(sudo:session): session closed for user root Nov 4 23:46:02.352049 sshd[1786]: Connection closed by 10.0.0.1 port 45562 Nov 4 23:46:02.352489 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Nov 4 23:46:02.364455 systemd[1]: sshd@6-10.0.0.25:22-10.0.0.1:45562.service: Deactivated successfully. Nov 4 23:46:02.367829 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 23:46:02.368836 systemd-logind[1579]: Session 7 logged out. Waiting for processes to exit. Nov 4 23:46:02.372173 systemd[1]: Started sshd@7-10.0.0.25:22-10.0.0.1:45578.service - OpenSSH per-connection server daemon (10.0.0.1:45578). Nov 4 23:46:02.372833 systemd-logind[1579]: Removed session 7. Nov 4 23:46:02.429626 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 45578 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:46:02.431287 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:46:02.436228 systemd-logind[1579]: New session 8 of user core. Nov 4 23:46:02.446065 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 4 23:46:02.503076 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 23:46:02.503392 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:46:02.640663 sudo[1798]: pam_unix(sudo:session): session closed for user root Nov 4 23:46:02.649831 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 23:46:02.650224 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:46:02.662330 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:46:02.721716 augenrules[1820]: No rules Nov 4 23:46:02.723432 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:46:02.723731 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:46:02.725064 sudo[1797]: pam_unix(sudo:session): session closed for user root Nov 4 23:46:02.727069 sshd[1796]: Connection closed by 10.0.0.1 port 45578 Nov 4 23:46:02.727407 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Nov 4 23:46:02.736494 systemd[1]: sshd@7-10.0.0.25:22-10.0.0.1:45578.service: Deactivated successfully. Nov 4 23:46:02.738456 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 23:46:02.739304 systemd-logind[1579]: Session 8 logged out. Waiting for processes to exit. Nov 4 23:46:02.742391 systemd[1]: Started sshd@8-10.0.0.25:22-10.0.0.1:45584.service - OpenSSH per-connection server daemon (10.0.0.1:45584). Nov 4 23:46:02.743122 systemd-logind[1579]: Removed session 8. Nov 4 23:46:02.804639 sshd[1829]: Accepted publickey for core from 10.0.0.1 port 45584 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:46:02.806490 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:46:02.811475 systemd-logind[1579]: New session 9 of user core. 
Nov 4 23:46:02.823080 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 23:46:02.879265 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 23:46:02.879582 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:46:04.418532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 23:46:04.421934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:46:04.647113 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 23:46:04.663373 (dockerd)[1856]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 23:46:04.773724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:46:04.794360 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:46:04.886486 kubelet[1862]: E1104 23:46:04.886416 1862 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:46:04.893782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:46:04.894014 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:46:04.894465 systemd[1]: kubelet.service: Consumed 366ms CPU time, 109.2M memory peak. 
Nov 4 23:46:05.217199 dockerd[1856]: time="2025-11-04T23:46:05.217091942Z" level=info msg="Starting up" Nov 4 23:46:05.218271 dockerd[1856]: time="2025-11-04T23:46:05.218189044Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 23:46:05.245000 dockerd[1856]: time="2025-11-04T23:46:05.244899983Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 23:46:05.639066 dockerd[1856]: time="2025-11-04T23:46:05.638946377Z" level=info msg="Loading containers: start." Nov 4 23:46:05.651976 kernel: Initializing XFRM netlink socket Nov 4 23:46:05.953783 systemd-networkd[1490]: docker0: Link UP Nov 4 23:46:05.958486 dockerd[1856]: time="2025-11-04T23:46:05.958437205Z" level=info msg="Loading containers: done." Nov 4 23:46:05.975514 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1979115333-merged.mount: Deactivated successfully. Nov 4 23:46:05.977214 dockerd[1856]: time="2025-11-04T23:46:05.977166603Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 23:46:05.977288 dockerd[1856]: time="2025-11-04T23:46:05.977267177Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 23:46:05.977382 dockerd[1856]: time="2025-11-04T23:46:05.977362132Z" level=info msg="Initializing buildkit" Nov 4 23:46:06.014041 dockerd[1856]: time="2025-11-04T23:46:06.013979718Z" level=info msg="Completed buildkit initialization" Nov 4 23:46:06.020149 dockerd[1856]: time="2025-11-04T23:46:06.020114379Z" level=info msg="Daemon has completed initialization" Nov 4 23:46:06.020287 dockerd[1856]: time="2025-11-04T23:46:06.020217125Z" level=info msg="API listen on /run/docker.sock" Nov 4 23:46:06.020462 systemd[1]: Started docker.service - Docker Application 
Container Engine. Nov 4 23:46:06.943236 containerd[1601]: time="2025-11-04T23:46:06.943136196Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 4 23:46:08.315822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197507158.mount: Deactivated successfully. Nov 4 23:46:10.295721 containerd[1601]: time="2025-11-04T23:46:10.295492863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:10.296281 containerd[1601]: time="2025-11-04T23:46:10.296098744Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 4 23:46:10.298014 containerd[1601]: time="2025-11-04T23:46:10.297971297Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:10.302296 containerd[1601]: time="2025-11-04T23:46:10.302228019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:10.304265 containerd[1601]: time="2025-11-04T23:46:10.304184834Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 3.360952013s" Nov 4 23:46:10.304324 containerd[1601]: time="2025-11-04T23:46:10.304279089Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 4 23:46:10.306336 containerd[1601]: 
time="2025-11-04T23:46:10.306122997Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 4 23:46:12.448114 containerd[1601]: time="2025-11-04T23:46:12.448036761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:12.448895 containerd[1601]: time="2025-11-04T23:46:12.448862391Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 4 23:46:12.450193 containerd[1601]: time="2025-11-04T23:46:12.450156317Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:12.453173 containerd[1601]: time="2025-11-04T23:46:12.453126500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:12.454104 containerd[1601]: time="2025-11-04T23:46:12.454038588Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.147876083s" Nov 4 23:46:12.454104 containerd[1601]: time="2025-11-04T23:46:12.454089242Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 4 23:46:12.454732 containerd[1601]: time="2025-11-04T23:46:12.454688207Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 4 23:46:14.106550 
containerd[1601]: time="2025-11-04T23:46:14.106491223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:14.107613 containerd[1601]: time="2025-11-04T23:46:14.107538835Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 4 23:46:14.108688 containerd[1601]: time="2025-11-04T23:46:14.108643588Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:14.111532 containerd[1601]: time="2025-11-04T23:46:14.111491420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:14.112665 containerd[1601]: time="2025-11-04T23:46:14.112605625Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.657883232s" Nov 4 23:46:14.112665 containerd[1601]: time="2025-11-04T23:46:14.112638255Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 4 23:46:14.113230 containerd[1601]: time="2025-11-04T23:46:14.113203043Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 4 23:46:15.130054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 4 23:46:15.132143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 4 23:46:15.386507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:46:15.402342 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:46:15.753048 kubelet[2169]: E1104 23:46:15.752840 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:46:15.759549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:46:15.759814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:46:15.760398 systemd[1]: kubelet.service: Consumed 568ms CPU time, 111.2M memory peak. Nov 4 23:46:15.811930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223566651.mount: Deactivated successfully. 
Nov 4 23:46:17.588319 containerd[1601]: time="2025-11-04T23:46:17.588241333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:17.589712 containerd[1601]: time="2025-11-04T23:46:17.589637483Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 4 23:46:17.591420 containerd[1601]: time="2025-11-04T23:46:17.591358845Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:17.593998 containerd[1601]: time="2025-11-04T23:46:17.593950886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:17.595055 containerd[1601]: time="2025-11-04T23:46:17.594966463Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 3.481710789s" Nov 4 23:46:17.595055 containerd[1601]: time="2025-11-04T23:46:17.595046843Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 4 23:46:17.595683 containerd[1601]: time="2025-11-04T23:46:17.595645873Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 4 23:46:18.733268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1816567676.mount: Deactivated successfully. 
Nov 4 23:46:21.178187 containerd[1601]: time="2025-11-04T23:46:21.178040031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:21.178714 containerd[1601]: time="2025-11-04T23:46:21.178683900Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 4 23:46:21.179986 containerd[1601]: time="2025-11-04T23:46:21.179939509Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:21.183277 containerd[1601]: time="2025-11-04T23:46:21.183226118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:21.184486 containerd[1601]: time="2025-11-04T23:46:21.184425150Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.588735953s" Nov 4 23:46:21.184486 containerd[1601]: time="2025-11-04T23:46:21.184471669Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 4 23:46:21.185196 containerd[1601]: time="2025-11-04T23:46:21.185155097Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 4 23:46:22.187816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3583132953.mount: Deactivated successfully. 
Nov 4 23:46:22.195210 containerd[1601]: time="2025-11-04T23:46:22.195138021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:46:22.195892 containerd[1601]: time="2025-11-04T23:46:22.195834010Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 4 23:46:22.197201 containerd[1601]: time="2025-11-04T23:46:22.197164339Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:46:22.200089 containerd[1601]: time="2025-11-04T23:46:22.200039941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:46:22.200783 containerd[1601]: time="2025-11-04T23:46:22.200733323Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.015546338s" Nov 4 23:46:22.200783 containerd[1601]: time="2025-11-04T23:46:22.200766522Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 4 23:46:22.201584 containerd[1601]: time="2025-11-04T23:46:22.201359070Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 4 23:46:24.481232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618729529.mount: Deactivated 
successfully. Nov 4 23:46:25.879799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 4 23:46:25.881655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:46:26.185134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:46:26.190426 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:46:26.265566 kubelet[2285]: E1104 23:46:26.265498 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:46:26.271275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:46:26.271483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:46:26.271894 systemd[1]: kubelet.service: Consumed 271ms CPU time, 110.2M memory peak. 
Nov 4 23:46:30.186373 containerd[1601]: time="2025-11-04T23:46:30.186269835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:30.187738 containerd[1601]: time="2025-11-04T23:46:30.187699627Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 4 23:46:30.189208 containerd[1601]: time="2025-11-04T23:46:30.189116760Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:30.194365 containerd[1601]: time="2025-11-04T23:46:30.193971966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:30.195570 containerd[1601]: time="2025-11-04T23:46:30.195500398Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 7.994093842s" Nov 4 23:46:30.195570 containerd[1601]: time="2025-11-04T23:46:30.195560513Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 4 23:46:34.011803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:46:34.012070 systemd[1]: kubelet.service: Consumed 271ms CPU time, 110.2M memory peak. Nov 4 23:46:34.015116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:46:34.048175 systemd[1]: Reload requested from client PID 2340 ('systemctl') (unit session-9.scope)... 
Nov 4 23:46:34.048194 systemd[1]: Reloading... Nov 4 23:46:34.135067 zram_generator::config[2381]: No configuration found. Nov 4 23:46:34.740490 systemd[1]: Reloading finished in 691 ms. Nov 4 23:46:34.821258 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 23:46:34.821391 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 23:46:34.821786 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:46:34.821837 systemd[1]: kubelet.service: Consumed 187ms CPU time, 98.4M memory peak. Nov 4 23:46:34.823595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:46:35.031791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:46:35.038695 update_engine[1581]: I20251104 23:46:35.038604 1581 update_attempter.cc:509] Updating boot flags... Nov 4 23:46:35.040212 (kubelet)[2431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:46:35.791177 kubelet[2431]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:46:35.791177 kubelet[2431]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:46:35.791177 kubelet[2431]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 23:46:35.791703 kubelet[2431]: I1104 23:46:35.791214 2431 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:46:36.248065 kubelet[2431]: I1104 23:46:36.248010 2431 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 23:46:36.248065 kubelet[2431]: I1104 23:46:36.248042 2431 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:46:36.248419 kubelet[2431]: I1104 23:46:36.248390 2431 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:46:36.313092 kubelet[2431]: E1104 23:46:36.312993 2431 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 23:46:36.313954 kubelet[2431]: I1104 23:46:36.313891 2431 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:46:36.328967 kubelet[2431]: I1104 23:46:36.327855 2431 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:46:36.335154 kubelet[2431]: I1104 23:46:36.335103 2431 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 23:46:36.335504 kubelet[2431]: I1104 23:46:36.335459 2431 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:46:36.335744 kubelet[2431]: I1104 23:46:36.335495 2431 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:46:36.336106 kubelet[2431]: I1104 23:46:36.335759 2431 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 23:46:36.336106 
kubelet[2431]: I1104 23:46:36.335773 2431 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 23:46:36.336106 kubelet[2431]: I1104 23:46:36.336007 2431 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:46:36.339797 kubelet[2431]: I1104 23:46:36.339757 2431 kubelet.go:480] "Attempting to sync node with API server" Nov 4 23:46:36.339797 kubelet[2431]: I1104 23:46:36.339789 2431 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:46:36.339876 kubelet[2431]: I1104 23:46:36.339844 2431 kubelet.go:386] "Adding apiserver pod source" Nov 4 23:46:36.339876 kubelet[2431]: I1104 23:46:36.339874 2431 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:46:36.345631 kubelet[2431]: I1104 23:46:36.345017 2431 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:46:36.345631 kubelet[2431]: I1104 23:46:36.345533 2431 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:46:36.345784 kubelet[2431]: E1104 23:46:36.345742 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:46:36.345833 kubelet[2431]: E1104 23:46:36.345775 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:46:36.346850 kubelet[2431]: W1104 23:46:36.346802 2431 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 23:46:36.349843 kubelet[2431]: I1104 23:46:36.349813 2431 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 23:46:36.349925 kubelet[2431]: I1104 23:46:36.349872 2431 server.go:1289] "Started kubelet" Nov 4 23:46:36.353059 kubelet[2431]: I1104 23:46:36.352991 2431 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:46:36.353340 kubelet[2431]: I1104 23:46:36.353318 2431 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:46:36.353469 kubelet[2431]: I1104 23:46:36.353451 2431 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:46:36.353594 kubelet[2431]: I1104 23:46:36.353576 2431 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:46:36.354570 kubelet[2431]: I1104 23:46:36.354553 2431 server.go:317] "Adding debug handlers to kubelet server" Nov 4 23:46:36.355133 kubelet[2431]: E1104 23:46:36.355102 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 23:46:36.355133 kubelet[2431]: I1104 23:46:36.355131 2431 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 23:46:36.355312 kubelet[2431]: I1104 23:46:36.355294 2431 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 23:46:36.355390 kubelet[2431]: I1104 23:46:36.355377 2431 reconciler.go:26] "Reconciler: start to sync state" Nov 4 23:46:36.355747 kubelet[2431]: E1104 23:46:36.355723 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" 
Nov 4 23:46:36.355837 kubelet[2431]: I1104 23:46:36.355816 2431 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:46:36.356815 kubelet[2431]: E1104 23:46:36.356773 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="200ms" Nov 4 23:46:36.359611 kubelet[2431]: I1104 23:46:36.359583 2431 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:46:36.359823 kubelet[2431]: I1104 23:46:36.359798 2431 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:46:36.361863 kubelet[2431]: E1104 23:46:36.360149 2431 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874f282761abcff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 23:46:36.349840639 +0000 UTC m=+1.304726105,LastTimestamp:2025-11-04 23:46:36.349840639 +0000 UTC m=+1.304726105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 4 23:46:36.362172 kubelet[2431]: I1104 23:46:36.362149 2431 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:46:36.363204 kubelet[2431]: E1104 
23:46:36.363159 2431 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:46:36.369621 kubelet[2431]: I1104 23:46:36.369537 2431 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 23:46:36.391467 kubelet[2431]: I1104 23:46:36.391427 2431 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:46:36.391467 kubelet[2431]: I1104 23:46:36.391455 2431 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:46:36.391629 kubelet[2431]: I1104 23:46:36.391482 2431 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:46:36.397879 kubelet[2431]: I1104 23:46:36.397831 2431 policy_none.go:49] "None policy: Start" Nov 4 23:46:36.397957 kubelet[2431]: I1104 23:46:36.397885 2431 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 23:46:36.397957 kubelet[2431]: I1104 23:46:36.397929 2431 state_mem.go:35] "Initializing new in-memory state store" Nov 4 23:46:36.402924 kubelet[2431]: I1104 23:46:36.402864 2431 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 23:46:36.402998 kubelet[2431]: I1104 23:46:36.402955 2431 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 23:46:36.403048 kubelet[2431]: I1104 23:46:36.403009 2431 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 23:46:36.403048 kubelet[2431]: I1104 23:46:36.403027 2431 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 23:46:36.403119 kubelet[2431]: E1104 23:46:36.403097 2431 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:46:36.403864 kubelet[2431]: E1104 23:46:36.403808 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:46:36.404129 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 23:46:36.427581 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 23:46:36.431022 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 4 23:46:36.449090 kubelet[2431]: E1104 23:46:36.449047 2431 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:46:36.449369 kubelet[2431]: I1104 23:46:36.449279 2431 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:46:36.449369 kubelet[2431]: I1104 23:46:36.449291 2431 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:46:36.449629 kubelet[2431]: I1104 23:46:36.449609 2431 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:46:36.450645 kubelet[2431]: E1104 23:46:36.450610 2431 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 23:46:36.450857 kubelet[2431]: E1104 23:46:36.450827 2431 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 4 23:46:36.517179 systemd[1]: Created slice kubepods-burstable-podb1ab91c97c8370a1a68455963c4eb533.slice - libcontainer container kubepods-burstable-podb1ab91c97c8370a1a68455963c4eb533.slice. Nov 4 23:46:36.539204 kubelet[2431]: E1104 23:46:36.539147 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:46:36.542476 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 4 23:46:36.550918 kubelet[2431]: I1104 23:46:36.550861 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:46:36.551451 kubelet[2431]: E1104 23:46:36.551393 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Nov 4 23:46:36.555767 kubelet[2431]: I1104 23:46:36.555612 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:36.555767 kubelet[2431]: I1104 23:46:36.555669 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:36.555767 kubelet[2431]: I1104 23:46:36.555704 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 4 23:46:36.555956 kubelet[2431]: I1104 23:46:36.555779 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1ab91c97c8370a1a68455963c4eb533-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1ab91c97c8370a1a68455963c4eb533\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:36.555956 kubelet[2431]: I1104 23:46:36.555870 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:36.556027 kubelet[2431]: I1104 23:46:36.555948 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:36.556027 kubelet[2431]: I1104 23:46:36.555984 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1ab91c97c8370a1a68455963c4eb533-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1ab91c97c8370a1a68455963c4eb533\") " 
pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:36.556027 kubelet[2431]: I1104 23:46:36.556023 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1ab91c97c8370a1a68455963c4eb533-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b1ab91c97c8370a1a68455963c4eb533\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:36.556147 kubelet[2431]: I1104 23:46:36.556049 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:36.556706 kubelet[2431]: E1104 23:46:36.556671 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:46:36.557560 kubelet[2431]: E1104 23:46:36.557491 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="400ms" Nov 4 23:46:36.560124 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Nov 4 23:46:36.562869 kubelet[2431]: E1104 23:46:36.562819 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:46:36.754318 kubelet[2431]: I1104 23:46:36.754254 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:46:36.754985 kubelet[2431]: E1104 23:46:36.754889 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Nov 4 23:46:36.840729 kubelet[2431]: E1104 23:46:36.840501 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:36.841934 containerd[1601]: time="2025-11-04T23:46:36.841874799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b1ab91c97c8370a1a68455963c4eb533,Namespace:kube-system,Attempt:0,}" Nov 4 23:46:36.859633 kubelet[2431]: E1104 23:46:36.859490 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:36.861219 containerd[1601]: time="2025-11-04T23:46:36.861175469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 4 23:46:36.863774 kubelet[2431]: E1104 23:46:36.863723 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:36.864944 containerd[1601]: time="2025-11-04T23:46:36.864884310Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 4 23:46:36.876071 containerd[1601]: time="2025-11-04T23:46:36.875970445Z" level=info msg="connecting to shim 11eec7a5cd4d3f12dafe4e3847a74f6c1cac8e0f7f0b61795d659338cdf94e2d" address="unix:///run/containerd/s/1ccac02c0b45e581097abbd0da8179702b10d6ae38831765def6ab7cde8f2ee3" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:46:36.959780 kubelet[2431]: E1104 23:46:36.959570 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="800ms" Nov 4 23:46:36.963722 containerd[1601]: time="2025-11-04T23:46:36.963663864Z" level=info msg="connecting to shim 8e2bce08c1c6d17f0bfe029d1adcbf6f2fe7f63672e84a3dc9296ffe9094921e" address="unix:///run/containerd/s/a49359d20f3cfe5bedef142baca94cbc606c14310b5f207988d6038f7f4571a8" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:46:36.983928 containerd[1601]: time="2025-11-04T23:46:36.983854658Z" level=info msg="connecting to shim f757d48ee0c6095f5611f5cfdaf09bb258f41637364748867a3356d7c751c791" address="unix:///run/containerd/s/7db704b8dd7f55c07f37ee26af1351c8b372126169b11c5b557b96368cf6d33e" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:46:36.989229 systemd[1]: Started cri-containerd-11eec7a5cd4d3f12dafe4e3847a74f6c1cac8e0f7f0b61795d659338cdf94e2d.scope - libcontainer container 11eec7a5cd4d3f12dafe4e3847a74f6c1cac8e0f7f0b61795d659338cdf94e2d. Nov 4 23:46:37.020115 systemd[1]: Started cri-containerd-8e2bce08c1c6d17f0bfe029d1adcbf6f2fe7f63672e84a3dc9296ffe9094921e.scope - libcontainer container 8e2bce08c1c6d17f0bfe029d1adcbf6f2fe7f63672e84a3dc9296ffe9094921e. 
Nov 4 23:46:37.046191 systemd[1]: Started cri-containerd-f757d48ee0c6095f5611f5cfdaf09bb258f41637364748867a3356d7c751c791.scope - libcontainer container f757d48ee0c6095f5611f5cfdaf09bb258f41637364748867a3356d7c751c791. Nov 4 23:46:37.113700 containerd[1601]: time="2025-11-04T23:46:37.113571400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b1ab91c97c8370a1a68455963c4eb533,Namespace:kube-system,Attempt:0,} returns sandbox id \"11eec7a5cd4d3f12dafe4e3847a74f6c1cac8e0f7f0b61795d659338cdf94e2d\"" Nov 4 23:46:37.120925 kubelet[2431]: E1104 23:46:37.120606 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:37.157580 kubelet[2431]: I1104 23:46:37.157542 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:46:37.158159 kubelet[2431]: E1104 23:46:37.158097 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Nov 4 23:46:37.172353 containerd[1601]: time="2025-11-04T23:46:37.172278149Z" level=info msg="CreateContainer within sandbox \"11eec7a5cd4d3f12dafe4e3847a74f6c1cac8e0f7f0b61795d659338cdf94e2d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 23:46:37.172736 containerd[1601]: time="2025-11-04T23:46:37.172684666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e2bce08c1c6d17f0bfe029d1adcbf6f2fe7f63672e84a3dc9296ffe9094921e\"" Nov 4 23:46:37.174291 kubelet[2431]: E1104 23:46:37.174249 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Nov 4 23:46:37.211859 containerd[1601]: time="2025-11-04T23:46:37.211808018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"f757d48ee0c6095f5611f5cfdaf09bb258f41637364748867a3356d7c751c791\"" Nov 4 23:46:37.212733 containerd[1601]: time="2025-11-04T23:46:37.212664770Z" level=info msg="CreateContainer within sandbox \"8e2bce08c1c6d17f0bfe029d1adcbf6f2fe7f63672e84a3dc9296ffe9094921e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 23:46:37.212986 kubelet[2431]: E1104 23:46:37.212952 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:37.298248 kubelet[2431]: E1104 23:46:37.298174 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:46:37.311686 containerd[1601]: time="2025-11-04T23:46:37.311632048Z" level=info msg="CreateContainer within sandbox \"f757d48ee0c6095f5611f5cfdaf09bb258f41637364748867a3356d7c751c791\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 23:46:37.432276 kubelet[2431]: E1104 23:46:37.432180 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:46:37.481453 containerd[1601]: time="2025-11-04T23:46:37.481377120Z" level=info msg="Container 
7388b4781c99f364abd24be95f63de23889e69e8f426a537e8778133de456b30: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:46:37.483516 kubelet[2431]: E1104 23:46:37.482921 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:46:37.484844 containerd[1601]: time="2025-11-04T23:46:37.484804078Z" level=info msg="Container 1de4756460c411964a2fc22f647166555e33402407d26fe404f697904f6d5fe4: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:46:37.487987 containerd[1601]: time="2025-11-04T23:46:37.487955623Z" level=info msg="Container 46af6424b76386c4c807419cf82960439b1bd7511e738959d3c85c266389067c: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:46:37.493152 containerd[1601]: time="2025-11-04T23:46:37.493104793Z" level=info msg="CreateContainer within sandbox \"11eec7a5cd4d3f12dafe4e3847a74f6c1cac8e0f7f0b61795d659338cdf94e2d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7388b4781c99f364abd24be95f63de23889e69e8f426a537e8778133de456b30\"" Nov 4 23:46:37.493896 containerd[1601]: time="2025-11-04T23:46:37.493866762Z" level=info msg="StartContainer for \"7388b4781c99f364abd24be95f63de23889e69e8f426a537e8778133de456b30\"" Nov 4 23:46:37.495964 containerd[1601]: time="2025-11-04T23:46:37.495787351Z" level=info msg="connecting to shim 7388b4781c99f364abd24be95f63de23889e69e8f426a537e8778133de456b30" address="unix:///run/containerd/s/1ccac02c0b45e581097abbd0da8179702b10d6ae38831765def6ab7cde8f2ee3" protocol=ttrpc version=3 Nov 4 23:46:37.499836 containerd[1601]: time="2025-11-04T23:46:37.499495613Z" level=info msg="CreateContainer within sandbox \"f757d48ee0c6095f5611f5cfdaf09bb258f41637364748867a3356d7c751c791\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} 
returns container id \"46af6424b76386c4c807419cf82960439b1bd7511e738959d3c85c266389067c\"" Nov 4 23:46:37.500957 containerd[1601]: time="2025-11-04T23:46:37.500801407Z" level=info msg="StartContainer for \"46af6424b76386c4c807419cf82960439b1bd7511e738959d3c85c266389067c\"" Nov 4 23:46:37.502049 containerd[1601]: time="2025-11-04T23:46:37.502003317Z" level=info msg="connecting to shim 46af6424b76386c4c807419cf82960439b1bd7511e738959d3c85c266389067c" address="unix:///run/containerd/s/7db704b8dd7f55c07f37ee26af1351c8b372126169b11c5b557b96368cf6d33e" protocol=ttrpc version=3 Nov 4 23:46:37.503037 containerd[1601]: time="2025-11-04T23:46:37.502986584Z" level=info msg="CreateContainer within sandbox \"8e2bce08c1c6d17f0bfe029d1adcbf6f2fe7f63672e84a3dc9296ffe9094921e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1de4756460c411964a2fc22f647166555e33402407d26fe404f697904f6d5fe4\"" Nov 4 23:46:37.503743 containerd[1601]: time="2025-11-04T23:46:37.503713704Z" level=info msg="StartContainer for \"1de4756460c411964a2fc22f647166555e33402407d26fe404f697904f6d5fe4\"" Nov 4 23:46:37.504867 containerd[1601]: time="2025-11-04T23:46:37.504831293Z" level=info msg="connecting to shim 1de4756460c411964a2fc22f647166555e33402407d26fe404f697904f6d5fe4" address="unix:///run/containerd/s/a49359d20f3cfe5bedef142baca94cbc606c14310b5f207988d6038f7f4571a8" protocol=ttrpc version=3 Nov 4 23:46:37.525207 systemd[1]: Started cri-containerd-7388b4781c99f364abd24be95f63de23889e69e8f426a537e8778133de456b30.scope - libcontainer container 7388b4781c99f364abd24be95f63de23889e69e8f426a537e8778133de456b30. Nov 4 23:46:37.539107 systemd[1]: Started cri-containerd-46af6424b76386c4c807419cf82960439b1bd7511e738959d3c85c266389067c.scope - libcontainer container 46af6424b76386c4c807419cf82960439b1bd7511e738959d3c85c266389067c. 
Nov 4 23:46:37.543868 systemd[1]: Started cri-containerd-1de4756460c411964a2fc22f647166555e33402407d26fe404f697904f6d5fe4.scope - libcontainer container 1de4756460c411964a2fc22f647166555e33402407d26fe404f697904f6d5fe4. Nov 4 23:46:37.607973 containerd[1601]: time="2025-11-04T23:46:37.607927319Z" level=info msg="StartContainer for \"7388b4781c99f364abd24be95f63de23889e69e8f426a537e8778133de456b30\" returns successfully" Nov 4 23:46:37.632437 containerd[1601]: time="2025-11-04T23:46:37.632385657Z" level=info msg="StartContainer for \"46af6424b76386c4c807419cf82960439b1bd7511e738959d3c85c266389067c\" returns successfully" Nov 4 23:46:37.712467 containerd[1601]: time="2025-11-04T23:46:37.712311640Z" level=info msg="StartContainer for \"1de4756460c411964a2fc22f647166555e33402407d26fe404f697904f6d5fe4\" returns successfully" Nov 4 23:46:37.960255 kubelet[2431]: I1104 23:46:37.960203 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:46:38.420437 kubelet[2431]: E1104 23:46:38.420386 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:46:38.420684 kubelet[2431]: E1104 23:46:38.420612 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:38.423678 kubelet[2431]: E1104 23:46:38.423648 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:46:38.423878 kubelet[2431]: E1104 23:46:38.423846 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:38.427328 kubelet[2431]: E1104 23:46:38.427301 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:46:38.427458 kubelet[2431]: E1104 23:46:38.427438 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:39.485964 kubelet[2431]: E1104 23:46:39.485917 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:46:39.486458 kubelet[2431]: E1104 23:46:39.486116 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:39.486458 kubelet[2431]: E1104 23:46:39.486189 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:46:39.486458 kubelet[2431]: E1104 23:46:39.486306 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:39.486458 kubelet[2431]: E1104 23:46:39.486452 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:46:39.486624 kubelet[2431]: E1104 23:46:39.486586 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:40.011137 kubelet[2431]: E1104 23:46:40.011073 2431 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 4 23:46:40.190824 kubelet[2431]: I1104 23:46:40.190733 2431 kubelet_node_status.go:78] "Successfully registered node" 
node="localhost" Nov 4 23:46:40.257575 kubelet[2431]: I1104 23:46:40.257504 2431 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:40.271750 kubelet[2431]: E1104 23:46:40.271589 2431 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:40.271750 kubelet[2431]: I1104 23:46:40.271638 2431 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:46:40.273712 kubelet[2431]: E1104 23:46:40.273676 2431 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 4 23:46:40.273712 kubelet[2431]: I1104 23:46:40.273715 2431 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:40.276397 kubelet[2431]: E1104 23:46:40.276340 2431 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:40.484153 kubelet[2431]: I1104 23:46:40.484082 2431 apiserver.go:52] "Watching apiserver" Nov 4 23:46:40.486226 kubelet[2431]: I1104 23:46:40.486190 2431 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:40.487256 kubelet[2431]: I1104 23:46:40.486496 2431 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:46:40.489822 kubelet[2431]: E1104 23:46:40.489784 2431 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:40.489958 kubelet[2431]: E1104 23:46:40.489882 2431 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 4 23:46:40.490077 kubelet[2431]: E1104 23:46:40.490057 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:40.490237 kubelet[2431]: E1104 23:46:40.490217 2431 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:40.556739 kubelet[2431]: I1104 23:46:40.556578 2431 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:46:42.206414 systemd[1]: Reload requested from client PID 2732 ('systemctl') (unit session-9.scope)... Nov 4 23:46:42.206438 systemd[1]: Reloading... Nov 4 23:46:42.319994 zram_generator::config[2779]: No configuration found. Nov 4 23:46:42.709337 systemd[1]: Reloading finished in 502 ms. Nov 4 23:46:42.738378 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:46:42.769658 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 23:46:42.770229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:46:42.770312 systemd[1]: kubelet.service: Consumed 1.234s CPU time, 130.1M memory peak. Nov 4 23:46:42.773256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:46:43.025013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 23:46:43.046449 (kubelet)[2821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:46:43.103194 kubelet[2821]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:46:43.103194 kubelet[2821]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:46:43.103194 kubelet[2821]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:46:43.103732 kubelet[2821]: I1104 23:46:43.103230 2821 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:46:43.110862 kubelet[2821]: I1104 23:46:43.110811 2821 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 23:46:43.110862 kubelet[2821]: I1104 23:46:43.110845 2821 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:46:43.111190 kubelet[2821]: I1104 23:46:43.111162 2821 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:46:43.112524 kubelet[2821]: I1104 23:46:43.112498 2821 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 23:46:43.116289 kubelet[2821]: I1104 23:46:43.116248 2821 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:46:43.122328 kubelet[2821]: I1104 23:46:43.122283 2821 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Nov 4 23:46:43.128800 kubelet[2821]: I1104 23:46:43.128734 2821 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 4 23:46:43.129107 kubelet[2821]: I1104 23:46:43.129051 2821 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:46:43.129235 kubelet[2821]: I1104 23:46:43.129093 2821 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion
":2} Nov 4 23:46:43.129359 kubelet[2821]: I1104 23:46:43.129237 2821 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 23:46:43.129359 kubelet[2821]: I1104 23:46:43.129246 2821 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 23:46:43.129359 kubelet[2821]: I1104 23:46:43.129297 2821 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:46:43.129535 kubelet[2821]: I1104 23:46:43.129500 2821 kubelet.go:480] "Attempting to sync node with API server" Nov 4 23:46:43.129535 kubelet[2821]: I1104 23:46:43.129523 2821 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:46:43.129535 kubelet[2821]: I1104 23:46:43.129553 2821 kubelet.go:386] "Adding apiserver pod source" Nov 4 23:46:43.129790 kubelet[2821]: I1104 23:46:43.129574 2821 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:46:43.131191 kubelet[2821]: I1104 23:46:43.131150 2821 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:46:43.131629 kubelet[2821]: I1104 23:46:43.131599 2821 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:46:43.139812 kubelet[2821]: I1104 23:46:43.139490 2821 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 23:46:43.139812 kubelet[2821]: I1104 23:46:43.139550 2821 server.go:1289] "Started kubelet" Nov 4 23:46:43.140818 kubelet[2821]: I1104 23:46:43.140342 2821 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:46:43.141238 kubelet[2821]: I1104 23:46:43.140243 2821 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:46:43.141314 kubelet[2821]: I1104 23:46:43.141270 2821 server.go:317] "Adding debug handlers to kubelet server" Nov 4 23:46:43.143058 kubelet[2821]: I1104 23:46:43.142520 2821 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:46:43.143262 kubelet[2821]: I1104 23:46:43.141165 2821 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:46:43.144072 kubelet[2821]: I1104 23:46:43.144035 2821 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:46:43.144698 kubelet[2821]: I1104 23:46:43.144668 2821 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 23:46:43.144855 kubelet[2821]: I1104 23:46:43.144819 2821 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 23:46:43.146215 kubelet[2821]: I1104 23:46:43.146190 2821 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:46:43.146483 kubelet[2821]: I1104 23:46:43.146457 2821 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:46:43.146657 kubelet[2821]: I1104 23:46:43.146624 2821 reconciler.go:26] "Reconciler: start to sync state" Nov 4 23:46:43.150173 kubelet[2821]: E1104 23:46:43.150134 2821 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:46:43.152428 kubelet[2821]: I1104 23:46:43.152395 2821 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:46:43.161290 kubelet[2821]: I1104 23:46:43.161178 2821 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 23:46:43.163380 kubelet[2821]: I1104 23:46:43.162753 2821 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 4 23:46:43.163380 kubelet[2821]: I1104 23:46:43.162794 2821 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 23:46:43.163380 kubelet[2821]: I1104 23:46:43.162838 2821 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 23:46:43.163380 kubelet[2821]: I1104 23:46:43.162852 2821 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 23:46:43.163380 kubelet[2821]: E1104 23:46:43.162951 2821 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:46:43.192623 kubelet[2821]: I1104 23:46:43.192584 2821 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:46:43.192623 kubelet[2821]: I1104 23:46:43.192604 2821 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:46:43.192623 kubelet[2821]: I1104 23:46:43.192625 2821 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:46:43.192879 kubelet[2821]: I1104 23:46:43.192758 2821 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 23:46:43.192879 kubelet[2821]: I1104 23:46:43.192770 2821 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 23:46:43.192879 kubelet[2821]: I1104 23:46:43.192793 2821 policy_none.go:49] "None policy: Start" Nov 4 23:46:43.192879 kubelet[2821]: I1104 23:46:43.192805 2821 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 23:46:43.192879 kubelet[2821]: I1104 23:46:43.192819 2821 state_mem.go:35] "Initializing new in-memory state store" Nov 4 23:46:43.193139 kubelet[2821]: I1104 23:46:43.193069 2821 state_mem.go:75] "Updated machine memory state" Nov 4 23:46:43.198300 kubelet[2821]: E1104 23:46:43.198269 2821 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:46:43.198490 kubelet[2821]: I1104 23:46:43.198469 
2821 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:46:43.198530 kubelet[2821]: I1104 23:46:43.198490 2821 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:46:43.199757 kubelet[2821]: I1104 23:46:43.199405 2821 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:46:43.201006 kubelet[2821]: E1104 23:46:43.200890 2821 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 23:46:43.265030 kubelet[2821]: I1104 23:46:43.264888 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:46:43.265230 kubelet[2821]: I1104 23:46:43.265073 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:43.265393 kubelet[2821]: I1104 23:46:43.265352 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:43.308726 kubelet[2821]: I1104 23:46:43.308580 2821 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:46:43.319680 kubelet[2821]: I1104 23:46:43.319591 2821 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 4 23:46:43.319680 kubelet[2821]: I1104 23:46:43.319706 2821 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 23:46:43.347080 kubelet[2821]: I1104 23:46:43.346999 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1ab91c97c8370a1a68455963c4eb533-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1ab91c97c8370a1a68455963c4eb533\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:43.347080 kubelet[2821]: I1104 23:46:43.347067 2821 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1ab91c97c8370a1a68455963c4eb533-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b1ab91c97c8370a1a68455963c4eb533\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:43.347080 kubelet[2821]: I1104 23:46:43.347095 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:43.347338 kubelet[2821]: I1104 23:46:43.347166 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 4 23:46:43.347338 kubelet[2821]: I1104 23:46:43.347244 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1ab91c97c8370a1a68455963c4eb533-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b1ab91c97c8370a1a68455963c4eb533\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:43.347338 kubelet[2821]: I1104 23:46:43.347283 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:43.347338 kubelet[2821]: I1104 23:46:43.347302 2821 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:43.347338 kubelet[2821]: I1104 23:46:43.347326 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:43.347550 kubelet[2821]: I1104 23:46:43.347364 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:43.574351 kubelet[2821]: E1104 23:46:43.573988 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:43.574975 kubelet[2821]: E1104 23:46:43.574037 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:43.575129 kubelet[2821]: E1104 23:46:43.574212 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:44.130579 kubelet[2821]: I1104 23:46:44.130513 2821 apiserver.go:52] "Watching apiserver" Nov 4 23:46:44.145398 kubelet[2821]: I1104 
23:46:44.145331 2821 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:46:44.176768 kubelet[2821]: I1104 23:46:44.176576 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:46:44.176768 kubelet[2821]: I1104 23:46:44.176603 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:44.176768 kubelet[2821]: I1104 23:46:44.176653 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:44.183850 kubelet[2821]: E1104 23:46:44.183784 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 4 23:46:44.184108 kubelet[2821]: E1104 23:46:44.184078 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:44.184666 kubelet[2821]: E1104 23:46:44.184631 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 4 23:46:44.184770 kubelet[2821]: E1104 23:46:44.184740 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:44.184916 kubelet[2821]: E1104 23:46:44.184886 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:46:44.185040 kubelet[2821]: E1104 23:46:44.185014 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:44.197877 
kubelet[2821]: I1104 23:46:44.197792 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.197744053 podStartE2EDuration="1.197744053s" podCreationTimestamp="2025-11-04 23:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:46:44.197548172 +0000 UTC m=+1.144803431" watchObservedRunningTime="2025-11-04 23:46:44.197744053 +0000 UTC m=+1.144999302" Nov 4 23:46:44.219353 kubelet[2821]: I1104 23:46:44.219282 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.219258686 podStartE2EDuration="1.219258686s" podCreationTimestamp="2025-11-04 23:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:46:44.209596147 +0000 UTC m=+1.156851426" watchObservedRunningTime="2025-11-04 23:46:44.219258686 +0000 UTC m=+1.166513945" Nov 4 23:46:45.177848 kubelet[2821]: E1104 23:46:45.177785 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:45.177848 kubelet[2821]: E1104 23:46:45.177847 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:45.178363 kubelet[2821]: E1104 23:46:45.178142 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:49.290945 kubelet[2821]: I1104 23:46:49.290825 2821 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 23:46:49.291711 
containerd[1601]: time="2025-11-04T23:46:49.291625525Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 4 23:46:49.292065 kubelet[2821]: I1104 23:46:49.291990 2821 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 23:46:49.980764 kubelet[2821]: I1104 23:46:49.980674 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.9806494489999995 podStartE2EDuration="6.980649449s" podCreationTimestamp="2025-11-04 23:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:46:44.224957503 +0000 UTC m=+1.172212782" watchObservedRunningTime="2025-11-04 23:46:49.980649449 +0000 UTC m=+6.927904708" Nov 4 23:46:49.991079 systemd[1]: Created slice kubepods-besteffort-pod937014ae_4a3e_49f4_855b_1f24bafe65a9.slice - libcontainer container kubepods-besteffort-pod937014ae_4a3e_49f4_855b_1f24bafe65a9.slice. 
Nov 4 23:46:50.090273 kubelet[2821]: I1104 23:46:50.090189 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/937014ae-4a3e-49f4-855b-1f24bafe65a9-kube-proxy\") pod \"kube-proxy-2qrkm\" (UID: \"937014ae-4a3e-49f4-855b-1f24bafe65a9\") " pod="kube-system/kube-proxy-2qrkm" Nov 4 23:46:50.090273 kubelet[2821]: I1104 23:46:50.090263 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvvm\" (UniqueName: \"kubernetes.io/projected/937014ae-4a3e-49f4-855b-1f24bafe65a9-kube-api-access-9vvvm\") pod \"kube-proxy-2qrkm\" (UID: \"937014ae-4a3e-49f4-855b-1f24bafe65a9\") " pod="kube-system/kube-proxy-2qrkm" Nov 4 23:46:50.090273 kubelet[2821]: I1104 23:46:50.090292 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/937014ae-4a3e-49f4-855b-1f24bafe65a9-xtables-lock\") pod \"kube-proxy-2qrkm\" (UID: \"937014ae-4a3e-49f4-855b-1f24bafe65a9\") " pod="kube-system/kube-proxy-2qrkm" Nov 4 23:46:50.090549 kubelet[2821]: I1104 23:46:50.090333 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/937014ae-4a3e-49f4-855b-1f24bafe65a9-lib-modules\") pod \"kube-proxy-2qrkm\" (UID: \"937014ae-4a3e-49f4-855b-1f24bafe65a9\") " pod="kube-system/kube-proxy-2qrkm" Nov 4 23:46:50.197518 kubelet[2821]: E1104 23:46:50.197464 2821 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 4 23:46:50.197518 kubelet[2821]: E1104 23:46:50.197516 2821 projected.go:194] Error preparing data for projected volume kube-api-access-9vvvm for pod kube-system/kube-proxy-2qrkm: configmap "kube-root-ca.crt" not found Nov 4 23:46:50.197780 kubelet[2821]: E1104 23:46:50.197631 2821 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/937014ae-4a3e-49f4-855b-1f24bafe65a9-kube-api-access-9vvvm podName:937014ae-4a3e-49f4-855b-1f24bafe65a9 nodeName:}" failed. No retries permitted until 2025-11-04 23:46:50.697596775 +0000 UTC m=+7.644852034 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9vvvm" (UniqueName: "kubernetes.io/projected/937014ae-4a3e-49f4-855b-1f24bafe65a9-kube-api-access-9vvvm") pod "kube-proxy-2qrkm" (UID: "937014ae-4a3e-49f4-855b-1f24bafe65a9") : configmap "kube-root-ca.crt" not found Nov 4 23:46:50.410395 systemd[1]: Created slice kubepods-besteffort-pod9f74f571_f02d_4359_aeed_f89df1bc0679.slice - libcontainer container kubepods-besteffort-pod9f74f571_f02d_4359_aeed_f89df1bc0679.slice. Nov 4 23:46:50.494126 kubelet[2821]: I1104 23:46:50.494047 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9f74f571-f02d-4359-aeed-f89df1bc0679-var-lib-calico\") pod \"tigera-operator-7dcd859c48-v5j5w\" (UID: \"9f74f571-f02d-4359-aeed-f89df1bc0679\") " pod="tigera-operator/tigera-operator-7dcd859c48-v5j5w" Nov 4 23:46:50.494126 kubelet[2821]: I1104 23:46:50.494104 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnxq2\" (UniqueName: \"kubernetes.io/projected/9f74f571-f02d-4359-aeed-f89df1bc0679-kube-api-access-vnxq2\") pod \"tigera-operator-7dcd859c48-v5j5w\" (UID: \"9f74f571-f02d-4359-aeed-f89df1bc0679\") " pod="tigera-operator/tigera-operator-7dcd859c48-v5j5w" Nov 4 23:46:50.714655 containerd[1601]: time="2025-11-04T23:46:50.714476015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v5j5w,Uid:9f74f571-f02d-4359-aeed-f89df1bc0679,Namespace:tigera-operator,Attempt:0,}" Nov 4 23:46:50.773843 containerd[1601]: time="2025-11-04T23:46:50.773787243Z" level=info 
msg="connecting to shim 7cd7ad64c3e6c1fc57d89672ca1724821ef74550294bf038f2e3ee10e7888913" address="unix:///run/containerd/s/b8ec1e44d95a516e5bf4027a93b588efa2376ae289dcddbcc5f72fc89bf69a0c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:46:50.858138 systemd[1]: Started cri-containerd-7cd7ad64c3e6c1fc57d89672ca1724821ef74550294bf038f2e3ee10e7888913.scope - libcontainer container 7cd7ad64c3e6c1fc57d89672ca1724821ef74550294bf038f2e3ee10e7888913. Nov 4 23:46:50.903822 kubelet[2821]: E1104 23:46:50.903749 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:50.905743 containerd[1601]: time="2025-11-04T23:46:50.905692267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2qrkm,Uid:937014ae-4a3e-49f4-855b-1f24bafe65a9,Namespace:kube-system,Attempt:0,}" Nov 4 23:46:50.915290 containerd[1601]: time="2025-11-04T23:46:50.915230992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v5j5w,Uid:9f74f571-f02d-4359-aeed-f89df1bc0679,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7cd7ad64c3e6c1fc57d89672ca1724821ef74550294bf038f2e3ee10e7888913\"" Nov 4 23:46:50.917757 containerd[1601]: time="2025-11-04T23:46:50.917599234Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 4 23:46:50.938319 containerd[1601]: time="2025-11-04T23:46:50.938244179Z" level=info msg="connecting to shim 1410bb30aecd5faae184ce7d04571b9117cdc9c284be9261eab4d5438c5ef37b" address="unix:///run/containerd/s/65389fe0b15d54e91d6743427a225671f729a2f95a3c889342342bb79f57ba18" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:46:50.974042 systemd[1]: Started cri-containerd-1410bb30aecd5faae184ce7d04571b9117cdc9c284be9261eab4d5438c5ef37b.scope - libcontainer container 1410bb30aecd5faae184ce7d04571b9117cdc9c284be9261eab4d5438c5ef37b. 
Nov 4 23:46:51.010724 containerd[1601]: time="2025-11-04T23:46:51.010664016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2qrkm,Uid:937014ae-4a3e-49f4-855b-1f24bafe65a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1410bb30aecd5faae184ce7d04571b9117cdc9c284be9261eab4d5438c5ef37b\"" Nov 4 23:46:51.011683 kubelet[2821]: E1104 23:46:51.011627 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:51.019577 containerd[1601]: time="2025-11-04T23:46:51.019507048Z" level=info msg="CreateContainer within sandbox \"1410bb30aecd5faae184ce7d04571b9117cdc9c284be9261eab4d5438c5ef37b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 23:46:51.032983 containerd[1601]: time="2025-11-04T23:46:51.032929417Z" level=info msg="Container 22f836bd528ded164458df8db5d8bc9947235df81f35aa24caf3c69b4c24576a: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:46:51.043035 containerd[1601]: time="2025-11-04T23:46:51.043001705Z" level=info msg="CreateContainer within sandbox \"1410bb30aecd5faae184ce7d04571b9117cdc9c284be9261eab4d5438c5ef37b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"22f836bd528ded164458df8db5d8bc9947235df81f35aa24caf3c69b4c24576a\"" Nov 4 23:46:51.043956 containerd[1601]: time="2025-11-04T23:46:51.043840736Z" level=info msg="StartContainer for \"22f836bd528ded164458df8db5d8bc9947235df81f35aa24caf3c69b4c24576a\"" Nov 4 23:46:51.047052 containerd[1601]: time="2025-11-04T23:46:51.047005492Z" level=info msg="connecting to shim 22f836bd528ded164458df8db5d8bc9947235df81f35aa24caf3c69b4c24576a" address="unix:///run/containerd/s/65389fe0b15d54e91d6743427a225671f729a2f95a3c889342342bb79f57ba18" protocol=ttrpc version=3 Nov 4 23:46:51.070063 systemd[1]: Started cri-containerd-22f836bd528ded164458df8db5d8bc9947235df81f35aa24caf3c69b4c24576a.scope - libcontainer container 
22f836bd528ded164458df8db5d8bc9947235df81f35aa24caf3c69b4c24576a. Nov 4 23:46:51.122217 containerd[1601]: time="2025-11-04T23:46:51.122148730Z" level=info msg="StartContainer for \"22f836bd528ded164458df8db5d8bc9947235df81f35aa24caf3c69b4c24576a\" returns successfully" Nov 4 23:46:51.195207 kubelet[2821]: E1104 23:46:51.195162 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:52.214278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231324955.mount: Deactivated successfully. Nov 4 23:46:52.697543 containerd[1601]: time="2025-11-04T23:46:52.697473012Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:52.698406 containerd[1601]: time="2025-11-04T23:46:52.698357071Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 4 23:46:52.699580 containerd[1601]: time="2025-11-04T23:46:52.699548961Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:52.702038 containerd[1601]: time="2025-11-04T23:46:52.701983155Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:46:52.702525 containerd[1601]: time="2025-11-04T23:46:52.702491903Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.784847434s" Nov 4 
23:46:52.702525 containerd[1601]: time="2025-11-04T23:46:52.702520192Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 4 23:46:52.707515 containerd[1601]: time="2025-11-04T23:46:52.707480351Z" level=info msg="CreateContainer within sandbox \"7cd7ad64c3e6c1fc57d89672ca1724821ef74550294bf038f2e3ee10e7888913\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 4 23:46:52.717688 containerd[1601]: time="2025-11-04T23:46:52.717626992Z" level=info msg="Container 0def9331c9ab70e26b3d931e8792777e782da3c1909c285823fecfad1e61166c: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:46:52.721952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4216162255.mount: Deactivated successfully. Nov 4 23:46:52.724468 containerd[1601]: time="2025-11-04T23:46:52.724412389Z" level=info msg="CreateContainer within sandbox \"7cd7ad64c3e6c1fc57d89672ca1724821ef74550294bf038f2e3ee10e7888913\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0def9331c9ab70e26b3d931e8792777e782da3c1909c285823fecfad1e61166c\"" Nov 4 23:46:52.725180 containerd[1601]: time="2025-11-04T23:46:52.725046818Z" level=info msg="StartContainer for \"0def9331c9ab70e26b3d931e8792777e782da3c1909c285823fecfad1e61166c\"" Nov 4 23:46:52.726228 containerd[1601]: time="2025-11-04T23:46:52.726180547Z" level=info msg="connecting to shim 0def9331c9ab70e26b3d931e8792777e782da3c1909c285823fecfad1e61166c" address="unix:///run/containerd/s/b8ec1e44d95a516e5bf4027a93b588efa2376ae289dcddbcc5f72fc89bf69a0c" protocol=ttrpc version=3 Nov 4 23:46:52.760070 systemd[1]: Started cri-containerd-0def9331c9ab70e26b3d931e8792777e782da3c1909c285823fecfad1e61166c.scope - libcontainer container 0def9331c9ab70e26b3d931e8792777e782da3c1909c285823fecfad1e61166c. 
Nov 4 23:46:52.800131 containerd[1601]: time="2025-11-04T23:46:52.800074230Z" level=info msg="StartContainer for \"0def9331c9ab70e26b3d931e8792777e782da3c1909c285823fecfad1e61166c\" returns successfully" Nov 4 23:46:53.211096 kubelet[2821]: I1104 23:46:53.211025 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2qrkm" podStartSLOduration=4.210998993 podStartE2EDuration="4.210998993s" podCreationTimestamp="2025-11-04 23:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:46:51.207549782 +0000 UTC m=+8.154805041" watchObservedRunningTime="2025-11-04 23:46:53.210998993 +0000 UTC m=+10.158254262" Nov 4 23:46:53.212193 kubelet[2821]: I1104 23:46:53.211137 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-v5j5w" podStartSLOduration=1.424979952 podStartE2EDuration="3.211131678s" podCreationTimestamp="2025-11-04 23:46:50 +0000 UTC" firstStartedPulling="2025-11-04 23:46:50.917161607 +0000 UTC m=+7.864416866" lastFinishedPulling="2025-11-04 23:46:52.703313332 +0000 UTC m=+9.650568592" observedRunningTime="2025-11-04 23:46:53.210847077 +0000 UTC m=+10.158102336" watchObservedRunningTime="2025-11-04 23:46:53.211131678 +0000 UTC m=+10.158386937" Nov 4 23:46:53.529974 kubelet[2821]: E1104 23:46:53.529815 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:53.759466 kubelet[2821]: E1104 23:46:53.759424 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:53.814595 kubelet[2821]: E1104 23:46:53.814284 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:54.208043 kubelet[2821]: E1104 23:46:54.206852 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:54.209251 kubelet[2821]: E1104 23:46:54.209112 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:46:58.826021 sudo[1833]: pam_unix(sudo:session): session closed for user root Nov 4 23:46:58.828929 sshd[1832]: Connection closed by 10.0.0.1 port 45584 Nov 4 23:46:58.828986 sshd-session[1829]: pam_unix(sshd:session): session closed for user core Nov 4 23:46:58.837564 systemd[1]: sshd@8-10.0.0.25:22-10.0.0.1:45584.service: Deactivated successfully. Nov 4 23:46:58.843577 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 23:46:58.843978 systemd[1]: session-9.scope: Consumed 7.233s CPU time, 213M memory peak. Nov 4 23:46:58.852284 systemd-logind[1579]: Session 9 logged out. Waiting for processes to exit. Nov 4 23:46:58.856154 systemd-logind[1579]: Removed session 9. Nov 4 23:47:04.878171 systemd[1]: Created slice kubepods-besteffort-podc8830b46_baab_4841_a126_bd41f25db7d3.slice - libcontainer container kubepods-besteffort-podc8830b46_baab_4841_a126_bd41f25db7d3.slice. 
Nov 4 23:47:04.894059 kubelet[2821]: I1104 23:47:04.893995 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md9cz\" (UniqueName: \"kubernetes.io/projected/c8830b46-baab-4841-a126-bd41f25db7d3-kube-api-access-md9cz\") pod \"calico-typha-686b5d9d8c-t2n7r\" (UID: \"c8830b46-baab-4841-a126-bd41f25db7d3\") " pod="calico-system/calico-typha-686b5d9d8c-t2n7r" Nov 4 23:47:04.894059 kubelet[2821]: I1104 23:47:04.894046 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c8830b46-baab-4841-a126-bd41f25db7d3-typha-certs\") pod \"calico-typha-686b5d9d8c-t2n7r\" (UID: \"c8830b46-baab-4841-a126-bd41f25db7d3\") " pod="calico-system/calico-typha-686b5d9d8c-t2n7r" Nov 4 23:47:04.894059 kubelet[2821]: I1104 23:47:04.894070 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8830b46-baab-4841-a126-bd41f25db7d3-tigera-ca-bundle\") pod \"calico-typha-686b5d9d8c-t2n7r\" (UID: \"c8830b46-baab-4841-a126-bd41f25db7d3\") " pod="calico-system/calico-typha-686b5d9d8c-t2n7r" Nov 4 23:47:05.073992 systemd[1]: Created slice kubepods-besteffort-pod3e14ecb1_ff1c_4c93_9ea2_656fda908707.slice - libcontainer container kubepods-besteffort-pod3e14ecb1_ff1c_4c93_9ea2_656fda908707.slice. 
Nov 4 23:47:05.095091 kubelet[2821]: I1104 23:47:05.095028 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e14ecb1-ff1c-4c93-9ea2-656fda908707-tigera-ca-bundle\") pod \"calico-node-ftmnr\" (UID: \"3e14ecb1-ff1c-4c93-9ea2-656fda908707\") " pod="calico-system/calico-node-ftmnr" Nov 4 23:47:05.095091 kubelet[2821]: I1104 23:47:05.095079 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e14ecb1-ff1c-4c93-9ea2-656fda908707-var-lib-calico\") pod \"calico-node-ftmnr\" (UID: \"3e14ecb1-ff1c-4c93-9ea2-656fda908707\") " pod="calico-system/calico-node-ftmnr" Nov 4 23:47:05.095312 kubelet[2821]: I1104 23:47:05.095098 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3e14ecb1-ff1c-4c93-9ea2-656fda908707-cni-net-dir\") pod \"calico-node-ftmnr\" (UID: \"3e14ecb1-ff1c-4c93-9ea2-656fda908707\") " pod="calico-system/calico-node-ftmnr" Nov 4 23:47:05.095312 kubelet[2821]: I1104 23:47:05.095185 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e14ecb1-ff1c-4c93-9ea2-656fda908707-xtables-lock\") pod \"calico-node-ftmnr\" (UID: \"3e14ecb1-ff1c-4c93-9ea2-656fda908707\") " pod="calico-system/calico-node-ftmnr" Nov 4 23:47:05.095312 kubelet[2821]: I1104 23:47:05.095232 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3e14ecb1-ff1c-4c93-9ea2-656fda908707-cni-log-dir\") pod \"calico-node-ftmnr\" (UID: \"3e14ecb1-ff1c-4c93-9ea2-656fda908707\") " pod="calico-system/calico-node-ftmnr" Nov 4 23:47:05.095312 kubelet[2821]: I1104 23:47:05.095247 2821 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e14ecb1-ff1c-4c93-9ea2-656fda908707-lib-modules\") pod \"calico-node-ftmnr\" (UID: \"3e14ecb1-ff1c-4c93-9ea2-656fda908707\") " pod="calico-system/calico-node-ftmnr" Nov 4 23:47:05.095312 kubelet[2821]: I1104 23:47:05.095261 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3e14ecb1-ff1c-4c93-9ea2-656fda908707-cni-bin-dir\") pod \"calico-node-ftmnr\" (UID: \"3e14ecb1-ff1c-4c93-9ea2-656fda908707\") " pod="calico-system/calico-node-ftmnr" Nov 4 23:47:05.095430 kubelet[2821]: I1104 23:47:05.095303 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3e14ecb1-ff1c-4c93-9ea2-656fda908707-flexvol-driver-host\") pod \"calico-node-ftmnr\" (UID: \"3e14ecb1-ff1c-4c93-9ea2-656fda908707\") " pod="calico-system/calico-node-ftmnr" Nov 4 23:47:05.095430 kubelet[2821]: I1104 23:47:05.095321 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tswms\" (UniqueName: \"kubernetes.io/projected/3e14ecb1-ff1c-4c93-9ea2-656fda908707-kube-api-access-tswms\") pod \"calico-node-ftmnr\" (UID: \"3e14ecb1-ff1c-4c93-9ea2-656fda908707\") " pod="calico-system/calico-node-ftmnr" Nov 4 23:47:05.095430 kubelet[2821]: I1104 23:47:05.095368 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3e14ecb1-ff1c-4c93-9ea2-656fda908707-var-run-calico\") pod \"calico-node-ftmnr\" (UID: \"3e14ecb1-ff1c-4c93-9ea2-656fda908707\") " pod="calico-system/calico-node-ftmnr" Nov 4 23:47:05.095514 kubelet[2821]: I1104 23:47:05.095437 2821 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3e14ecb1-ff1c-4c93-9ea2-656fda908707-node-certs\") pod \"calico-node-ftmnr\" (UID: \"3e14ecb1-ff1c-4c93-9ea2-656fda908707\") " pod="calico-system/calico-node-ftmnr" Nov 4 23:47:05.095514 kubelet[2821]: I1104 23:47:05.095477 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3e14ecb1-ff1c-4c93-9ea2-656fda908707-policysync\") pod \"calico-node-ftmnr\" (UID: \"3e14ecb1-ff1c-4c93-9ea2-656fda908707\") " pod="calico-system/calico-node-ftmnr" Nov 4 23:47:05.181523 kubelet[2821]: E1104 23:47:05.181384 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:05.182175 containerd[1601]: time="2025-11-04T23:47:05.182111998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-686b5d9d8c-t2n7r,Uid:c8830b46-baab-4841-a126-bd41f25db7d3,Namespace:calico-system,Attempt:0,}" Nov 4 23:47:05.201616 kubelet[2821]: E1104 23:47:05.201410 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.201616 kubelet[2821]: W1104 23:47:05.201432 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.205526 kubelet[2821]: E1104 23:47:05.205383 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.205775 kubelet[2821]: E1104 23:47:05.205656 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.205775 kubelet[2821]: W1104 23:47:05.205671 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.205775 kubelet[2821]: E1104 23:47:05.205681 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.207072 kubelet[2821]: E1104 23:47:05.207049 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.207231 kubelet[2821]: W1104 23:47:05.207114 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.207231 kubelet[2821]: E1104 23:47:05.207139 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.219380 containerd[1601]: time="2025-11-04T23:47:05.219242155Z" level=info msg="connecting to shim e74f0473c4ac678a31cb105eaf351813a20df157f97bd702093b4dfffc17e280" address="unix:///run/containerd/s/11cde72c7a6defd0a8b3c10439dde83b4edf29723ae0f811d3182d076e9f63e5" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:47:05.247867 kubelet[2821]: E1104 23:47:05.247822 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11" Nov 4 23:47:05.260100 systemd[1]: Started cri-containerd-e74f0473c4ac678a31cb105eaf351813a20df157f97bd702093b4dfffc17e280.scope - libcontainer container e74f0473c4ac678a31cb105eaf351813a20df157f97bd702093b4dfffc17e280. Nov 4 23:47:05.276569 kubelet[2821]: E1104 23:47:05.276371 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.276569 kubelet[2821]: W1104 23:47:05.276440 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.276569 kubelet[2821]: E1104 23:47:05.276473 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.277275 kubelet[2821]: E1104 23:47:05.277161 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.277275 kubelet[2821]: W1104 23:47:05.277202 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.277275 kubelet[2821]: E1104 23:47:05.277216 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.277713 kubelet[2821]: E1104 23:47:05.277698 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.277890 kubelet[2821]: W1104 23:47:05.277814 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.277890 kubelet[2821]: E1104 23:47:05.277835 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.278478 kubelet[2821]: E1104 23:47:05.278365 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.278478 kubelet[2821]: W1104 23:47:05.278379 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.278478 kubelet[2821]: E1104 23:47:05.278405 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.278872 kubelet[2821]: E1104 23:47:05.278849 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.279045 kubelet[2821]: W1104 23:47:05.278947 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.279045 kubelet[2821]: E1104 23:47:05.278962 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.279390 kubelet[2821]: E1104 23:47:05.279376 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.279562 kubelet[2821]: W1104 23:47:05.279453 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.279562 kubelet[2821]: E1104 23:47:05.279470 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.279997 kubelet[2821]: E1104 23:47:05.279982 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.280154 kubelet[2821]: W1104 23:47:05.280074 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.280154 kubelet[2821]: E1104 23:47:05.280089 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.280581 kubelet[2821]: E1104 23:47:05.280565 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.280711 kubelet[2821]: W1104 23:47:05.280645 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.280711 kubelet[2821]: E1104 23:47:05.280660 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.281006 kubelet[2821]: E1104 23:47:05.280991 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.281162 kubelet[2821]: W1104 23:47:05.281070 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.281162 kubelet[2821]: E1104 23:47:05.281083 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.281439 kubelet[2821]: E1104 23:47:05.281426 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.281490 kubelet[2821]: W1104 23:47:05.281480 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.281571 kubelet[2821]: E1104 23:47:05.281558 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.281958 kubelet[2821]: E1104 23:47:05.281852 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.281958 kubelet[2821]: W1104 23:47:05.281865 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.281958 kubelet[2821]: E1104 23:47:05.281876 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.282262 kubelet[2821]: E1104 23:47:05.282247 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.282390 kubelet[2821]: W1104 23:47:05.282326 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.282390 kubelet[2821]: E1104 23:47:05.282341 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.282638 kubelet[2821]: E1104 23:47:05.282624 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.282796 kubelet[2821]: W1104 23:47:05.282702 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.282796 kubelet[2821]: E1104 23:47:05.282717 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.283033 kubelet[2821]: E1104 23:47:05.283020 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.283151 kubelet[2821]: W1104 23:47:05.283094 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.283151 kubelet[2821]: E1104 23:47:05.283109 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.283454 kubelet[2821]: E1104 23:47:05.283385 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.283454 kubelet[2821]: W1104 23:47:05.283399 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.283454 kubelet[2821]: E1104 23:47:05.283410 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.283740 kubelet[2821]: E1104 23:47:05.283725 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.283855 kubelet[2821]: W1104 23:47:05.283793 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.283855 kubelet[2821]: E1104 23:47:05.283806 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.284156 kubelet[2821]: E1104 23:47:05.284141 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.284299 kubelet[2821]: W1104 23:47:05.284206 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.284299 kubelet[2821]: E1104 23:47:05.284220 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.284462 kubelet[2821]: E1104 23:47:05.284448 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.284531 kubelet[2821]: W1104 23:47:05.284519 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.284598 kubelet[2821]: E1104 23:47:05.284577 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.284940 kubelet[2821]: E1104 23:47:05.284880 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.284940 kubelet[2821]: W1104 23:47:05.284894 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.285094 kubelet[2821]: E1104 23:47:05.285039 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.285324 kubelet[2821]: E1104 23:47:05.285309 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.285467 kubelet[2821]: W1104 23:47:05.285383 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.285467 kubelet[2821]: E1104 23:47:05.285397 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.296834 kubelet[2821]: E1104 23:47:05.296748 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.296834 kubelet[2821]: W1104 23:47:05.296776 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.296834 kubelet[2821]: E1104 23:47:05.296800 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.297309 kubelet[2821]: I1104 23:47:05.297195 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b6136b3a-c7e7-4b68-a7f8-18b611db9e11-registration-dir\") pod \"csi-node-driver-qfh2g\" (UID: \"b6136b3a-c7e7-4b68-a7f8-18b611db9e11\") " pod="calico-system/csi-node-driver-qfh2g" Nov 4 23:47:05.297673 kubelet[2821]: E1104 23:47:05.297612 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.297673 kubelet[2821]: W1104 23:47:05.297628 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.297673 kubelet[2821]: E1104 23:47:05.297641 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.298126 kubelet[2821]: E1104 23:47:05.298083 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.298126 kubelet[2821]: W1104 23:47:05.298097 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.298126 kubelet[2821]: E1104 23:47:05.298108 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.298575 kubelet[2821]: E1104 23:47:05.298531 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.298575 kubelet[2821]: W1104 23:47:05.298546 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.298575 kubelet[2821]: E1104 23:47:05.298558 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.298767 kubelet[2821]: I1104 23:47:05.298711 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6136b3a-c7e7-4b68-a7f8-18b611db9e11-kubelet-dir\") pod \"csi-node-driver-qfh2g\" (UID: \"b6136b3a-c7e7-4b68-a7f8-18b611db9e11\") " pod="calico-system/csi-node-driver-qfh2g" Nov 4 23:47:05.299146 kubelet[2821]: E1104 23:47:05.299100 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.299146 kubelet[2821]: W1104 23:47:05.299119 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.299146 kubelet[2821]: E1104 23:47:05.299131 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.299347 kubelet[2821]: I1104 23:47:05.299306 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b6136b3a-c7e7-4b68-a7f8-18b611db9e11-varrun\") pod \"csi-node-driver-qfh2g\" (UID: \"b6136b3a-c7e7-4b68-a7f8-18b611db9e11\") " pod="calico-system/csi-node-driver-qfh2g" Nov 4 23:47:05.299737 kubelet[2821]: E1104 23:47:05.299689 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.299737 kubelet[2821]: W1104 23:47:05.299704 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.299737 kubelet[2821]: E1104 23:47:05.299717 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.300272 kubelet[2821]: E1104 23:47:05.300224 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.300272 kubelet[2821]: W1104 23:47:05.300239 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.300272 kubelet[2821]: E1104 23:47:05.300255 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.300723 kubelet[2821]: E1104 23:47:05.300684 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.300723 kubelet[2821]: W1104 23:47:05.300698 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.300723 kubelet[2821]: E1104 23:47:05.300708 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.301175 kubelet[2821]: E1104 23:47:05.301133 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.301175 kubelet[2821]: W1104 23:47:05.301147 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.301175 kubelet[2821]: E1104 23:47:05.301159 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.301561 kubelet[2821]: E1104 23:47:05.301524 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.301561 kubelet[2821]: W1104 23:47:05.301536 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.301561 kubelet[2821]: E1104 23:47:05.301546 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.301756 kubelet[2821]: I1104 23:47:05.301710 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b6136b3a-c7e7-4b68-a7f8-18b611db9e11-socket-dir\") pod \"csi-node-driver-qfh2g\" (UID: \"b6136b3a-c7e7-4b68-a7f8-18b611db9e11\") " pod="calico-system/csi-node-driver-qfh2g" Nov 4 23:47:05.302139 kubelet[2821]: E1104 23:47:05.302092 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.302139 kubelet[2821]: W1104 23:47:05.302110 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.302139 kubelet[2821]: E1104 23:47:05.302122 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.302381 kubelet[2821]: I1104 23:47:05.302294 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd88t\" (UniqueName: \"kubernetes.io/projected/b6136b3a-c7e7-4b68-a7f8-18b611db9e11-kube-api-access-fd88t\") pod \"csi-node-driver-qfh2g\" (UID: \"b6136b3a-c7e7-4b68-a7f8-18b611db9e11\") " pod="calico-system/csi-node-driver-qfh2g" Nov 4 23:47:05.302676 kubelet[2821]: E1104 23:47:05.302630 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.302676 kubelet[2821]: W1104 23:47:05.302647 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.302676 kubelet[2821]: E1104 23:47:05.302660 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.303159 kubelet[2821]: E1104 23:47:05.303114 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.303159 kubelet[2821]: W1104 23:47:05.303130 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.303159 kubelet[2821]: E1104 23:47:05.303142 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.303645 kubelet[2821]: E1104 23:47:05.303570 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.303645 kubelet[2821]: W1104 23:47:05.303586 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.303645 kubelet[2821]: E1104 23:47:05.303600 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.304109 kubelet[2821]: E1104 23:47:05.304058 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.304109 kubelet[2821]: W1104 23:47:05.304073 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.304109 kubelet[2821]: E1104 23:47:05.304086 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.308390 containerd[1601]: time="2025-11-04T23:47:05.308329458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-686b5d9d8c-t2n7r,Uid:c8830b46-baab-4841-a126-bd41f25db7d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"e74f0473c4ac678a31cb105eaf351813a20df157f97bd702093b4dfffc17e280\"" Nov 4 23:47:05.309648 kubelet[2821]: E1104 23:47:05.309572 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:05.311160 containerd[1601]: time="2025-11-04T23:47:05.311104423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 4 23:47:05.379553 kubelet[2821]: E1104 23:47:05.379481 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:05.380258 containerd[1601]: time="2025-11-04T23:47:05.380197119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ftmnr,Uid:3e14ecb1-ff1c-4c93-9ea2-656fda908707,Namespace:calico-system,Attempt:0,}" Nov 4 23:47:05.403827 kubelet[2821]: E1104 23:47:05.403780 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.403827 kubelet[2821]: W1104 23:47:05.403816 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.403978 kubelet[2821]: E1104 23:47:05.403842 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.404369 kubelet[2821]: E1104 23:47:05.404147 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.404369 kubelet[2821]: W1104 23:47:05.404166 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.404369 kubelet[2821]: E1104 23:47:05.404178 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.405999 kubelet[2821]: E1104 23:47:05.404437 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.405999 kubelet[2821]: W1104 23:47:05.404446 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.405999 kubelet[2821]: E1104 23:47:05.404455 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.406087 kubelet[2821]: E1104 23:47:05.405992 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.406087 kubelet[2821]: W1104 23:47:05.406053 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.406087 kubelet[2821]: E1104 23:47:05.406067 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.406527 kubelet[2821]: E1104 23:47:05.406405 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.406527 kubelet[2821]: W1104 23:47:05.406423 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.406527 kubelet[2821]: E1104 23:47:05.406432 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.406762 kubelet[2821]: E1104 23:47:05.406737 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.406762 kubelet[2821]: W1104 23:47:05.406753 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.406825 kubelet[2821]: E1104 23:47:05.406763 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.407165 kubelet[2821]: E1104 23:47:05.407136 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.407165 kubelet[2821]: W1104 23:47:05.407152 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.407165 kubelet[2821]: E1104 23:47:05.407162 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.407804 kubelet[2821]: E1104 23:47:05.407468 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.407804 kubelet[2821]: W1104 23:47:05.407481 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.407804 kubelet[2821]: E1104 23:47:05.407492 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.407961 containerd[1601]: time="2025-11-04T23:47:05.407444224Z" level=info msg="connecting to shim d99a08c66ba847f0a60f0b6a8ab6674ebcd7b384fdf35f4a565343bbbbbf0dde" address="unix:///run/containerd/s/45c562d20f4842057d1369ad68aaac5d6f4e1b504a7e76a6283fffe5816cf8b9" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:47:05.408243 kubelet[2821]: E1104 23:47:05.408210 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.408314 kubelet[2821]: W1104 23:47:05.408270 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.408314 kubelet[2821]: E1104 23:47:05.408283 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:05.409293 kubelet[2821]: E1104 23:47:05.408654 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.409293 kubelet[2821]: W1104 23:47:05.408696 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.409293 kubelet[2821]: E1104 23:47:05.408709 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:05.409293 kubelet[2821]: E1104 23:47:05.409148 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:05.409293 kubelet[2821]: W1104 23:47:05.409157 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:05.409293 kubelet[2821]: E1104 23:47:05.409167 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 4 23:47:05.409734 kubelet[2821]: E1104 23:47:05.409705 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:47:05.409734 kubelet[2821]: W1104 23:47:05.409722 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:47:05.409734 kubelet[2821]: E1104 23:47:05.409732 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:47:05.453269 systemd[1]: Started cri-containerd-d99a08c66ba847f0a60f0b6a8ab6674ebcd7b384fdf35f4a565343bbbbbf0dde.scope - libcontainer container d99a08c66ba847f0a60f0b6a8ab6674ebcd7b384fdf35f4a565343bbbbbf0dde.
Nov 4 23:47:05.495892 containerd[1601]: time="2025-11-04T23:47:05.495840309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ftmnr,Uid:3e14ecb1-ff1c-4c93-9ea2-656fda908707,Namespace:calico-system,Attempt:0,} returns sandbox id \"d99a08c66ba847f0a60f0b6a8ab6674ebcd7b384fdf35f4a565343bbbbbf0dde\""
Nov 4 23:47:05.496637 kubelet[2821]: E1104 23:47:05.496607 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:47:06.813999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount615639486.mount: Deactivated successfully.
Nov 4 23:47:07.183773 kubelet[2821]: E1104 23:47:07.183685 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11"
Nov 4 23:47:07.587115 containerd[1601]: time="2025-11-04T23:47:07.586981391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:47:07.587971 containerd[1601]: time="2025-11-04T23:47:07.587941315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 4 23:47:07.589316 containerd[1601]: time="2025-11-04T23:47:07.589287797Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:47:07.591595 containerd[1601]: time="2025-11-04T23:47:07.591549794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:47:07.592084 containerd[1601]: time="2025-11-04T23:47:07.592060622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.280899465s"
Nov 4 23:47:07.592141 containerd[1601]: time="2025-11-04T23:47:07.592089831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 4 23:47:07.593373 containerd[1601]: time="2025-11-04T23:47:07.593167241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 4 23:47:07.608124 containerd[1601]: time="2025-11-04T23:47:07.608070992Z" level=info msg="CreateContainer within sandbox \"e74f0473c4ac678a31cb105eaf351813a20df157f97bd702093b4dfffc17e280\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 4 23:47:07.616561 containerd[1601]: time="2025-11-04T23:47:07.616525731Z" level=info msg="Container d4d9a5847f5e231bebd6c6cd64ae5da03b2c9c1fa8280a49e57c7d47ec49484d: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:47:07.623443 containerd[1601]: time="2025-11-04T23:47:07.623402984Z" level=info msg="CreateContainer within sandbox \"e74f0473c4ac678a31cb105eaf351813a20df157f97bd702093b4dfffc17e280\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d4d9a5847f5e231bebd6c6cd64ae5da03b2c9c1fa8280a49e57c7d47ec49484d\""
Nov 4 23:47:07.623822 containerd[1601]: time="2025-11-04T23:47:07.623804142Z" level=info msg="StartContainer for \"d4d9a5847f5e231bebd6c6cd64ae5da03b2c9c1fa8280a49e57c7d47ec49484d\""
Nov 4 23:47:07.624771 containerd[1601]: time="2025-11-04T23:47:07.624730828Z" level=info msg="connecting to shim d4d9a5847f5e231bebd6c6cd64ae5da03b2c9c1fa8280a49e57c7d47ec49484d" address="unix:///run/containerd/s/11cde72c7a6defd0a8b3c10439dde83b4edf29723ae0f811d3182d076e9f63e5" protocol=ttrpc version=3
Nov 4 23:47:07.652106 systemd[1]: Started cri-containerd-d4d9a5847f5e231bebd6c6cd64ae5da03b2c9c1fa8280a49e57c7d47ec49484d.scope - libcontainer container d4d9a5847f5e231bebd6c6cd64ae5da03b2c9c1fa8280a49e57c7d47ec49484d.
Nov 4 23:47:07.707689 containerd[1601]: time="2025-11-04T23:47:07.707627845Z" level=info msg="StartContainer for \"d4d9a5847f5e231bebd6c6cd64ae5da03b2c9c1fa8280a49e57c7d47ec49484d\" returns successfully"
Nov 4 23:47:08.237020 kubelet[2821]: E1104 23:47:08.236977 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:47:08.248755 kubelet[2821]: I1104 23:47:08.248552 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-686b5d9d8c-t2n7r" podStartSLOduration=1.966171472 podStartE2EDuration="4.24852511s" podCreationTimestamp="2025-11-04 23:47:04 +0000 UTC" firstStartedPulling="2025-11-04 23:47:05.310634854 +0000 UTC m=+22.257890113" lastFinishedPulling="2025-11-04 23:47:07.592988492 +0000 UTC m=+24.540243751" observedRunningTime="2025-11-04 23:47:08.248224005 +0000 UTC m=+25.195479264" watchObservedRunningTime="2025-11-04 23:47:08.24852511 +0000 UTC m=+25.195780370"
Nov 4 23:47:08.303370 kubelet[2821]: E1104 23:47:08.303310 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:47:08.303370 kubelet[2821]: W1104 23:47:08.303344 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:47:08.303370 kubelet[2821]: E1104 23:47:08.303374 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 4 23:47:08.330604 kubelet[2821]: E1104 23:47:08.330605 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 4 23:47:09.163533 kubelet[2821]: E1104 23:47:09.163469 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11" Nov 4 23:47:09.238434 kubelet[2821]: I1104 23:47:09.238381 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:47:09.238887 kubelet[2821]: E1104 23:47:09.238781 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:09.313670 kubelet[2821]: E1104 23:47:09.313612 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.313670 kubelet[2821]: W1104 23:47:09.313644 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.313670 kubelet[2821]: E1104 23:47:09.313671 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.313894 kubelet[2821]: E1104 23:47:09.313866 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.313894 kubelet[2821]: W1104 23:47:09.313874 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.313894 kubelet[2821]: E1104 23:47:09.313883 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.314230 kubelet[2821]: E1104 23:47:09.314212 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.314230 kubelet[2821]: W1104 23:47:09.314224 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.314230 kubelet[2821]: E1104 23:47:09.314233 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.314462 kubelet[2821]: E1104 23:47:09.314416 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.314462 kubelet[2821]: W1104 23:47:09.314441 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.314462 kubelet[2821]: E1104 23:47:09.314451 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.314749 kubelet[2821]: E1104 23:47:09.314681 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.314749 kubelet[2821]: W1104 23:47:09.314690 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.314749 kubelet[2821]: E1104 23:47:09.314700 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.314987 kubelet[2821]: E1104 23:47:09.314955 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.314987 kubelet[2821]: W1104 23:47:09.314979 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.314987 kubelet[2821]: E1104 23:47:09.314989 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.315199 kubelet[2821]: E1104 23:47:09.315183 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.315199 kubelet[2821]: W1104 23:47:09.315194 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.315266 kubelet[2821]: E1104 23:47:09.315202 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.315423 kubelet[2821]: E1104 23:47:09.315406 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.315464 kubelet[2821]: W1104 23:47:09.315425 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.315464 kubelet[2821]: E1104 23:47:09.315447 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.315732 kubelet[2821]: E1104 23:47:09.315701 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.315732 kubelet[2821]: W1104 23:47:09.315722 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.315807 kubelet[2821]: E1104 23:47:09.315740 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.315999 kubelet[2821]: E1104 23:47:09.315978 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.316043 kubelet[2821]: W1104 23:47:09.315999 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.316043 kubelet[2821]: E1104 23:47:09.316017 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.316266 kubelet[2821]: E1104 23:47:09.316242 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.316266 kubelet[2821]: W1104 23:47:09.316264 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.316351 kubelet[2821]: E1104 23:47:09.316281 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.316526 kubelet[2821]: E1104 23:47:09.316503 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.316526 kubelet[2821]: W1104 23:47:09.316521 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.316584 kubelet[2821]: E1104 23:47:09.316542 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.316836 kubelet[2821]: E1104 23:47:09.316810 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.316836 kubelet[2821]: W1104 23:47:09.316827 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.316890 kubelet[2821]: E1104 23:47:09.316845 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.317132 kubelet[2821]: E1104 23:47:09.317108 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.317132 kubelet[2821]: W1104 23:47:09.317125 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.317205 kubelet[2821]: E1104 23:47:09.317137 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.317351 kubelet[2821]: E1104 23:47:09.317337 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.317351 kubelet[2821]: W1104 23:47:09.317348 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.317399 kubelet[2821]: E1104 23:47:09.317358 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.333955 kubelet[2821]: E1104 23:47:09.333891 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.334013 kubelet[2821]: W1104 23:47:09.333953 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.334013 kubelet[2821]: E1104 23:47:09.333985 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.334281 kubelet[2821]: E1104 23:47:09.334255 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.334281 kubelet[2821]: W1104 23:47:09.334271 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.334350 kubelet[2821]: E1104 23:47:09.334282 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.334550 kubelet[2821]: E1104 23:47:09.334518 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.334550 kubelet[2821]: W1104 23:47:09.334537 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.334550 kubelet[2821]: E1104 23:47:09.334548 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.334844 kubelet[2821]: E1104 23:47:09.334826 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.334844 kubelet[2821]: W1104 23:47:09.334839 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.334933 kubelet[2821]: E1104 23:47:09.334849 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.335077 kubelet[2821]: E1104 23:47:09.335053 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.335077 kubelet[2821]: W1104 23:47:09.335065 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.335077 kubelet[2821]: E1104 23:47:09.335073 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.335417 kubelet[2821]: E1104 23:47:09.335365 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.335417 kubelet[2821]: W1104 23:47:09.335381 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.335417 kubelet[2821]: E1104 23:47:09.335404 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.335733 kubelet[2821]: E1104 23:47:09.335714 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.335733 kubelet[2821]: W1104 23:47:09.335728 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.335809 kubelet[2821]: E1104 23:47:09.335738 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.336032 kubelet[2821]: E1104 23:47:09.336010 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.336032 kubelet[2821]: W1104 23:47:09.336025 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.336124 kubelet[2821]: E1104 23:47:09.336038 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.336259 kubelet[2821]: E1104 23:47:09.336239 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.336259 kubelet[2821]: W1104 23:47:09.336251 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.336259 kubelet[2821]: E1104 23:47:09.336260 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.336502 kubelet[2821]: E1104 23:47:09.336479 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.336542 kubelet[2821]: W1104 23:47:09.336500 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.336542 kubelet[2821]: E1104 23:47:09.336521 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.336823 kubelet[2821]: E1104 23:47:09.336800 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.336875 kubelet[2821]: W1104 23:47:09.336821 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.336875 kubelet[2821]: E1104 23:47:09.336840 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.337148 kubelet[2821]: E1104 23:47:09.337121 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.337148 kubelet[2821]: W1104 23:47:09.337134 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.337148 kubelet[2821]: E1104 23:47:09.337144 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.337381 kubelet[2821]: E1104 23:47:09.337360 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.337381 kubelet[2821]: W1104 23:47:09.337373 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.337381 kubelet[2821]: E1104 23:47:09.337384 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.337623 kubelet[2821]: E1104 23:47:09.337603 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.337623 kubelet[2821]: W1104 23:47:09.337614 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.337623 kubelet[2821]: E1104 23:47:09.337624 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.337848 kubelet[2821]: E1104 23:47:09.337825 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.337926 kubelet[2821]: W1104 23:47:09.337847 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.337926 kubelet[2821]: E1104 23:47:09.337867 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.338371 kubelet[2821]: E1104 23:47:09.338339 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.338371 kubelet[2821]: W1104 23:47:09.338360 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.338446 kubelet[2821]: E1104 23:47:09.338395 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:09.339241 kubelet[2821]: E1104 23:47:09.339216 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.339241 kubelet[2821]: W1104 23:47:09.339231 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.339241 kubelet[2821]: E1104 23:47:09.339243 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:47:09.339475 kubelet[2821]: E1104 23:47:09.339446 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:47:09.339475 kubelet[2821]: W1104 23:47:09.339470 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:47:09.339559 kubelet[2821]: E1104 23:47:09.339481 2821 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:47:10.758133 containerd[1601]: time="2025-11-04T23:47:10.758060698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:47:10.804347 containerd[1601]: time="2025-11-04T23:47:10.804297961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 4 23:47:10.818090 containerd[1601]: time="2025-11-04T23:47:10.818022799Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:47:10.828766 containerd[1601]: time="2025-11-04T23:47:10.828700991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:47:10.829503 containerd[1601]: time="2025-11-04T23:47:10.829449182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 3.236247101s" Nov 4 23:47:10.829503 containerd[1601]: time="2025-11-04T23:47:10.829489594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 4 23:47:10.838718 containerd[1601]: time="2025-11-04T23:47:10.838664990Z" level=info msg="CreateContainer within sandbox \"d99a08c66ba847f0a60f0b6a8ab6674ebcd7b384fdf35f4a565343bbbbbf0dde\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 4 23:47:10.849936 containerd[1601]: time="2025-11-04T23:47:10.849858527Z" level=info msg="Container d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:47:10.859239 containerd[1601]: time="2025-11-04T23:47:10.859182751Z" level=info msg="CreateContainer within sandbox \"d99a08c66ba847f0a60f0b6a8ab6674ebcd7b384fdf35f4a565343bbbbbf0dde\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8\"" Nov 4 23:47:10.859776 containerd[1601]: time="2025-11-04T23:47:10.859733366Z" level=info msg="StartContainer for \"d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8\"" Nov 4 23:47:10.861238 containerd[1601]: time="2025-11-04T23:47:10.861209168Z" level=info msg="connecting to shim d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8" address="unix:///run/containerd/s/45c562d20f4842057d1369ad68aaac5d6f4e1b504a7e76a6283fffe5816cf8b9" protocol=ttrpc version=3 Nov 4 23:47:10.886106 systemd[1]: Started cri-containerd-d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8.scope - libcontainer container d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8. Nov 4 23:47:10.933436 containerd[1601]: time="2025-11-04T23:47:10.933380675Z" level=info msg="StartContainer for \"d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8\" returns successfully" Nov 4 23:47:10.946386 systemd[1]: cri-containerd-d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8.scope: Deactivated successfully. 
Nov 4 23:47:10.949622 containerd[1601]: time="2025-11-04T23:47:10.949562634Z" level=info msg="received exit event container_id:\"d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8\" id:\"d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8\" pid:3548 exited_at:{seconds:1762300030 nanos:949109164}" Nov 4 23:47:10.949704 containerd[1601]: time="2025-11-04T23:47:10.949673106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8\" id:\"d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8\" pid:3548 exited_at:{seconds:1762300030 nanos:949109164}" Nov 4 23:47:10.979838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d80d304d34548d93338438ccf435d76dc23f80b08153db55f05df0a0603fdcd8-rootfs.mount: Deactivated successfully. Nov 4 23:47:11.163781 kubelet[2821]: E1104 23:47:11.163679 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11" Nov 4 23:47:11.245654 kubelet[2821]: E1104 23:47:11.245603 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:12.249670 kubelet[2821]: E1104 23:47:12.249611 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:12.250434 containerd[1601]: time="2025-11-04T23:47:12.250394916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 4 23:47:13.164112 kubelet[2821]: E1104 23:47:13.163840 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11" Nov 4 23:47:15.163298 kubelet[2821]: E1104 23:47:15.163240 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11" Nov 4 23:47:16.952660 containerd[1601]: time="2025-11-04T23:47:16.952532470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:47:16.953640 containerd[1601]: time="2025-11-04T23:47:16.953594591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 4 23:47:16.954966 containerd[1601]: time="2025-11-04T23:47:16.954932100Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:47:16.957705 containerd[1601]: time="2025-11-04T23:47:16.957652131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:47:16.958428 containerd[1601]: time="2025-11-04T23:47:16.958323091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.707886631s" Nov 
4 23:47:16.958428 containerd[1601]: time="2025-11-04T23:47:16.958365595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 4 23:47:16.963432 containerd[1601]: time="2025-11-04T23:47:16.963390827Z" level=info msg="CreateContainer within sandbox \"d99a08c66ba847f0a60f0b6a8ab6674ebcd7b384fdf35f4a565343bbbbbf0dde\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 4 23:47:16.975278 containerd[1601]: time="2025-11-04T23:47:16.975195114Z" level=info msg="Container 9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:47:16.986470 containerd[1601]: time="2025-11-04T23:47:16.986384842Z" level=info msg="CreateContainer within sandbox \"d99a08c66ba847f0a60f0b6a8ab6674ebcd7b384fdf35f4a565343bbbbbf0dde\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994\"" Nov 4 23:47:16.987286 containerd[1601]: time="2025-11-04T23:47:16.987236703Z" level=info msg="StartContainer for \"9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994\"" Nov 4 23:47:16.989289 containerd[1601]: time="2025-11-04T23:47:16.989243569Z" level=info msg="connecting to shim 9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994" address="unix:///run/containerd/s/45c562d20f4842057d1369ad68aaac5d6f4e1b504a7e76a6283fffe5816cf8b9" protocol=ttrpc version=3 Nov 4 23:47:17.020246 systemd[1]: Started cri-containerd-9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994.scope - libcontainer container 9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994. 
Nov 4 23:47:17.080379 containerd[1601]: time="2025-11-04T23:47:17.080252726Z" level=info msg="StartContainer for \"9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994\" returns successfully" Nov 4 23:47:17.167259 kubelet[2821]: E1104 23:47:17.167175 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11" Nov 4 23:47:17.261639 kubelet[2821]: E1104 23:47:17.261491 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:18.263395 kubelet[2821]: E1104 23:47:18.263332 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:18.873709 systemd[1]: cri-containerd-9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994.scope: Deactivated successfully. Nov 4 23:47:18.874152 systemd[1]: cri-containerd-9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994.scope: Consumed 723ms CPU time, 176M memory peak, 4.6M read from disk, 171.3M written to disk. 
Nov 4 23:47:18.876925 containerd[1601]: time="2025-11-04T23:47:18.876027768Z" level=info msg="received exit event container_id:\"9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994\" id:\"9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994\" pid:3608 exited_at:{seconds:1762300038 nanos:875793392}" Nov 4 23:47:18.876925 containerd[1601]: time="2025-11-04T23:47:18.876245703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994\" id:\"9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994\" pid:3608 exited_at:{seconds:1762300038 nanos:875793392}" Nov 4 23:47:18.905165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9670fd8bf0f92562f43f143b8d97fe24c07b623add18aedaf18296fe077e5994-rootfs.mount: Deactivated successfully. Nov 4 23:47:19.004779 kubelet[2821]: I1104 23:47:19.004714 2821 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 4 23:47:19.170352 systemd[1]: Created slice kubepods-besteffort-podb6136b3a_c7e7_4b68_a7f8_18b611db9e11.slice - libcontainer container kubepods-besteffort-podb6136b3a_c7e7_4b68_a7f8_18b611db9e11.slice. Nov 4 23:47:19.173535 containerd[1601]: time="2025-11-04T23:47:19.173485356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qfh2g,Uid:b6136b3a-c7e7-4b68-a7f8-18b611db9e11,Namespace:calico-system,Attempt:0,}" Nov 4 23:47:20.147386 systemd[1]: Created slice kubepods-besteffort-pod22717a25_e0bd_4f8f_934c_4d9e328d23a6.slice - libcontainer container kubepods-besteffort-pod22717a25_e0bd_4f8f_934c_4d9e328d23a6.slice. Nov 4 23:47:20.161785 systemd[1]: Created slice kubepods-burstable-podc4a097ba_de0d_4e75_973d_ce0fa1163477.slice - libcontainer container kubepods-burstable-podc4a097ba_de0d_4e75_973d_ce0fa1163477.slice. 
Nov 4 23:47:20.177775 systemd[1]: Created slice kubepods-besteffort-podf1677806_4e2b_4950_9276_2412224d7bb8.slice - libcontainer container kubepods-besteffort-podf1677806_4e2b_4950_9276_2412224d7bb8.slice. Nov 4 23:47:20.185285 systemd[1]: Created slice kubepods-besteffort-pod7e6b05bc_c5cf_42a7_8a57_876b8ddab4ff.slice - libcontainer container kubepods-besteffort-pod7e6b05bc_c5cf_42a7_8a57_876b8ddab4ff.slice. Nov 4 23:47:20.193888 systemd[1]: Created slice kubepods-besteffort-podf48ec58e_62fc_4c79_8936_16e0e5b98045.slice - libcontainer container kubepods-besteffort-podf48ec58e_62fc_4c79_8936_16e0e5b98045.slice. Nov 4 23:47:20.202326 systemd[1]: Created slice kubepods-besteffort-pod62847ab3_7c8e_4cdf_a2a4_d5f59a1f5dd8.slice - libcontainer container kubepods-besteffort-pod62847ab3_7c8e_4cdf_a2a4_d5f59a1f5dd8.slice. Nov 4 23:47:20.211382 systemd[1]: Created slice kubepods-burstable-pod996cd107_7b4a_4765_be02_ba532f9cecae.slice - libcontainer container kubepods-burstable-pod996cd107_7b4a_4765_be02_ba532f9cecae.slice. 
Nov 4 23:47:20.211793 kubelet[2821]: I1104 23:47:20.211769 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsgzr\" (UniqueName: \"kubernetes.io/projected/f1677806-4e2b-4950-9276-2412224d7bb8-kube-api-access-jsgzr\") pod \"goldmane-666569f655-dt8tc\" (UID: \"f1677806-4e2b-4950-9276-2412224d7bb8\") " pod="calico-system/goldmane-666569f655-dt8tc" Nov 4 23:47:20.212721 kubelet[2821]: I1104 23:47:20.211802 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1677806-4e2b-4950-9276-2412224d7bb8-config\") pod \"goldmane-666569f655-dt8tc\" (UID: \"f1677806-4e2b-4950-9276-2412224d7bb8\") " pod="calico-system/goldmane-666569f655-dt8tc" Nov 4 23:47:20.212721 kubelet[2821]: I1104 23:47:20.211819 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8-whisker-ca-bundle\") pod \"whisker-54cff4559b-g4czq\" (UID: \"62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8\") " pod="calico-system/whisker-54cff4559b-g4czq" Nov 4 23:47:20.212721 kubelet[2821]: I1104 23:47:20.211850 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62pfm\" (UniqueName: \"kubernetes.io/projected/f48ec58e-62fc-4c79-8936-16e0e5b98045-kube-api-access-62pfm\") pod \"calico-apiserver-6d8f987886-b4d7r\" (UID: \"f48ec58e-62fc-4c79-8936-16e0e5b98045\") " pod="calico-apiserver/calico-apiserver-6d8f987886-b4d7r" Nov 4 23:47:20.212721 kubelet[2821]: I1104 23:47:20.211869 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/22717a25-e0bd-4f8f-934c-4d9e328d23a6-calico-apiserver-certs\") pod \"calico-apiserver-6d8f987886-89zn9\" (UID: 
\"22717a25-e0bd-4f8f-934c-4d9e328d23a6\") " pod="calico-apiserver/calico-apiserver-6d8f987886-89zn9" Nov 4 23:47:20.212721 kubelet[2821]: I1104 23:47:20.212133 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c35617ca-16ee-4ea4-b266-2395bc382e38-tigera-ca-bundle\") pod \"calico-kube-controllers-7bfb77c96-hmxm7\" (UID: \"c35617ca-16ee-4ea4-b266-2395bc382e38\") " pod="calico-system/calico-kube-controllers-7bfb77c96-hmxm7" Nov 4 23:47:20.212866 kubelet[2821]: I1104 23:47:20.212159 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/996cd107-7b4a-4765-be02-ba532f9cecae-config-volume\") pod \"coredns-674b8bbfcf-jfsqb\" (UID: \"996cd107-7b4a-4765-be02-ba532f9cecae\") " pod="kube-system/coredns-674b8bbfcf-jfsqb" Nov 4 23:47:20.212866 kubelet[2821]: I1104 23:47:20.212174 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvcrv\" (UniqueName: \"kubernetes.io/projected/996cd107-7b4a-4765-be02-ba532f9cecae-kube-api-access-xvcrv\") pod \"coredns-674b8bbfcf-jfsqb\" (UID: \"996cd107-7b4a-4765-be02-ba532f9cecae\") " pod="kube-system/coredns-674b8bbfcf-jfsqb" Nov 4 23:47:20.212866 kubelet[2821]: I1104 23:47:20.212189 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff-calico-apiserver-certs\") pod \"calico-apiserver-5cb8dcb848-zz8td\" (UID: \"7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff\") " pod="calico-apiserver/calico-apiserver-5cb8dcb848-zz8td" Nov 4 23:47:20.212866 kubelet[2821]: I1104 23:47:20.212206 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/f48ec58e-62fc-4c79-8936-16e0e5b98045-calico-apiserver-certs\") pod \"calico-apiserver-6d8f987886-b4d7r\" (UID: \"f48ec58e-62fc-4c79-8936-16e0e5b98045\") " pod="calico-apiserver/calico-apiserver-6d8f987886-b4d7r" Nov 4 23:47:20.212866 kubelet[2821]: I1104 23:47:20.212222 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1677806-4e2b-4950-9276-2412224d7bb8-goldmane-ca-bundle\") pod \"goldmane-666569f655-dt8tc\" (UID: \"f1677806-4e2b-4950-9276-2412224d7bb8\") " pod="calico-system/goldmane-666569f655-dt8tc" Nov 4 23:47:20.213175 kubelet[2821]: I1104 23:47:20.212236 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f1677806-4e2b-4950-9276-2412224d7bb8-goldmane-key-pair\") pod \"goldmane-666569f655-dt8tc\" (UID: \"f1677806-4e2b-4950-9276-2412224d7bb8\") " pod="calico-system/goldmane-666569f655-dt8tc" Nov 4 23:47:20.213175 kubelet[2821]: I1104 23:47:20.212251 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frf4z\" (UniqueName: \"kubernetes.io/projected/c35617ca-16ee-4ea4-b266-2395bc382e38-kube-api-access-frf4z\") pod \"calico-kube-controllers-7bfb77c96-hmxm7\" (UID: \"c35617ca-16ee-4ea4-b266-2395bc382e38\") " pod="calico-system/calico-kube-controllers-7bfb77c96-hmxm7" Nov 4 23:47:20.213175 kubelet[2821]: I1104 23:47:20.212287 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4a097ba-de0d-4e75-973d-ce0fa1163477-config-volume\") pod \"coredns-674b8bbfcf-vdw45\" (UID: \"c4a097ba-de0d-4e75-973d-ce0fa1163477\") " pod="kube-system/coredns-674b8bbfcf-vdw45" Nov 4 23:47:20.213175 kubelet[2821]: I1104 23:47:20.212311 2821 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl7gm\" (UniqueName: \"kubernetes.io/projected/22717a25-e0bd-4f8f-934c-4d9e328d23a6-kube-api-access-tl7gm\") pod \"calico-apiserver-6d8f987886-89zn9\" (UID: \"22717a25-e0bd-4f8f-934c-4d9e328d23a6\") " pod="calico-apiserver/calico-apiserver-6d8f987886-89zn9" Nov 4 23:47:20.213175 kubelet[2821]: I1104 23:47:20.212327 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thtxk\" (UniqueName: \"kubernetes.io/projected/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8-kube-api-access-thtxk\") pod \"whisker-54cff4559b-g4czq\" (UID: \"62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8\") " pod="calico-system/whisker-54cff4559b-g4czq" Nov 4 23:47:20.213295 kubelet[2821]: I1104 23:47:20.212343 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl2hl\" (UniqueName: \"kubernetes.io/projected/7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff-kube-api-access-jl2hl\") pod \"calico-apiserver-5cb8dcb848-zz8td\" (UID: \"7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff\") " pod="calico-apiserver/calico-apiserver-5cb8dcb848-zz8td" Nov 4 23:47:20.213295 kubelet[2821]: I1104 23:47:20.212359 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzrt\" (UniqueName: \"kubernetes.io/projected/c4a097ba-de0d-4e75-973d-ce0fa1163477-kube-api-access-jmzrt\") pod \"coredns-674b8bbfcf-vdw45\" (UID: \"c4a097ba-de0d-4e75-973d-ce0fa1163477\") " pod="kube-system/coredns-674b8bbfcf-vdw45" Nov 4 23:47:20.213295 kubelet[2821]: I1104 23:47:20.212372 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8-whisker-backend-key-pair\") pod \"whisker-54cff4559b-g4czq\" (UID: \"62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8\") " 
pod="calico-system/whisker-54cff4559b-g4czq" Nov 4 23:47:20.216805 systemd[1]: Created slice kubepods-besteffort-podc35617ca_16ee_4ea4_b266_2395bc382e38.slice - libcontainer container kubepods-besteffort-podc35617ca_16ee_4ea4_b266_2395bc382e38.slice. Nov 4 23:47:20.273503 kubelet[2821]: E1104 23:47:20.273447 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:20.274870 containerd[1601]: time="2025-11-04T23:47:20.274779809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 4 23:47:20.285187 containerd[1601]: time="2025-11-04T23:47:20.285124056Z" level=error msg="Failed to destroy network for sandbox \"4d3c4d5971fb50317c301474d9781d3d99f9e9cc3c9a55f0a2d02194d54f8ae1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.287286 systemd[1]: run-netns-cni\x2d845a9eac\x2ddccb\x2d7989\x2db409\x2d2deb2a09e688.mount: Deactivated successfully. 
Nov 4 23:47:20.420526 containerd[1601]: time="2025-11-04T23:47:20.420331114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qfh2g,Uid:b6136b3a-c7e7-4b68-a7f8-18b611db9e11,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d3c4d5971fb50317c301474d9781d3d99f9e9cc3c9a55f0a2d02194d54f8ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.420745 kubelet[2821]: E1104 23:47:20.420578 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d3c4d5971fb50317c301474d9781d3d99f9e9cc3c9a55f0a2d02194d54f8ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.420745 kubelet[2821]: E1104 23:47:20.420650 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d3c4d5971fb50317c301474d9781d3d99f9e9cc3c9a55f0a2d02194d54f8ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qfh2g" Nov 4 23:47:20.420745 kubelet[2821]: E1104 23:47:20.420675 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d3c4d5971fb50317c301474d9781d3d99f9e9cc3c9a55f0a2d02194d54f8ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qfh2g" Nov 4 
23:47:20.420971 kubelet[2821]: E1104 23:47:20.420724 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qfh2g_calico-system(b6136b3a-c7e7-4b68-a7f8-18b611db9e11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qfh2g_calico-system(b6136b3a-c7e7-4b68-a7f8-18b611db9e11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d3c4d5971fb50317c301474d9781d3d99f9e9cc3c9a55f0a2d02194d54f8ae1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11" Nov 4 23:47:20.455939 containerd[1601]: time="2025-11-04T23:47:20.455840631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8f987886-89zn9,Uid:22717a25-e0bd-4f8f-934c-4d9e328d23a6,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:47:20.477454 kubelet[2821]: E1104 23:47:20.477381 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:20.480173 containerd[1601]: time="2025-11-04T23:47:20.480097672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vdw45,Uid:c4a097ba-de0d-4e75-973d-ce0fa1163477,Namespace:kube-system,Attempt:0,}" Nov 4 23:47:20.485196 containerd[1601]: time="2025-11-04T23:47:20.485133861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dt8tc,Uid:f1677806-4e2b-4950-9276-2412224d7bb8,Namespace:calico-system,Attempt:0,}" Nov 4 23:47:20.492543 containerd[1601]: time="2025-11-04T23:47:20.492288026Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5cb8dcb848-zz8td,Uid:7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:47:20.499597 containerd[1601]: time="2025-11-04T23:47:20.499530237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8f987886-b4d7r,Uid:f48ec58e-62fc-4c79-8936-16e0e5b98045,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:47:20.509122 containerd[1601]: time="2025-11-04T23:47:20.509051326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54cff4559b-g4czq,Uid:62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8,Namespace:calico-system,Attempt:0,}" Nov 4 23:47:20.515853 kubelet[2821]: E1104 23:47:20.515799 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:20.518262 containerd[1601]: time="2025-11-04T23:47:20.518218008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jfsqb,Uid:996cd107-7b4a-4765-be02-ba532f9cecae,Namespace:kube-system,Attempt:0,}" Nov 4 23:47:20.521100 containerd[1601]: time="2025-11-04T23:47:20.521053625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfb77c96-hmxm7,Uid:c35617ca-16ee-4ea4-b266-2395bc382e38,Namespace:calico-system,Attempt:0,}" Nov 4 23:47:20.590087 containerd[1601]: time="2025-11-04T23:47:20.590011235Z" level=error msg="Failed to destroy network for sandbox \"0fbd647ad24d9b05888bf1755d5d9e9cb1a1f03683906399a6e86d4b6647f077\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.620133 containerd[1601]: time="2025-11-04T23:47:20.620076622Z" level=error msg="Failed to destroy network for sandbox \"31fe1d7e5ec8986574fc46044dbcd79ace84b65ccef2802479a0821ffda342e1\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.656999 containerd[1601]: time="2025-11-04T23:47:20.628102303Z" level=error msg="Failed to destroy network for sandbox \"9b70d92b5c62862f96ff941ec6eea90fc71d535a60c3d755af0c7834457d46e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.657160 containerd[1601]: time="2025-11-04T23:47:20.636386258Z" level=error msg="Failed to destroy network for sandbox \"c40c75a1f6af5d6229048cce012be484fd6d7b5071420d0aad4770e75093a475\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.657202 containerd[1601]: time="2025-11-04T23:47:20.656094371Z" level=error msg="Failed to destroy network for sandbox \"44c90e254bda5d3477b9f99410b8a841050a30a9230db52bfbee417d2ef86753\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.657512 containerd[1601]: time="2025-11-04T23:47:20.636472940Z" level=error msg="Failed to destroy network for sandbox \"9f8eb7eadb5691c39552f016c4635c186365730528e5c449e40e55fc65d76e86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.657989 containerd[1601]: time="2025-11-04T23:47:20.656053439Z" level=error msg="Failed to destroy network for sandbox \"0dfeeef7dea28d89ebf8b20b306a5b60797d1b317186900ad84ea450327e23c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.698290 containerd[1601]: time="2025-11-04T23:47:20.698148991Z" level=error msg="Failed to destroy network for sandbox \"e3b228fa7d2481c9489ead88fa87493926850900ed60aac3b6f4055f9c8ae8ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.704658 containerd[1601]: time="2025-11-04T23:47:20.704609897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb8dcb848-zz8td,Uid:7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fbd647ad24d9b05888bf1755d5d9e9cb1a1f03683906399a6e86d4b6647f077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.705026 kubelet[2821]: E1104 23:47:20.704963 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fbd647ad24d9b05888bf1755d5d9e9cb1a1f03683906399a6e86d4b6647f077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.706223 kubelet[2821]: E1104 23:47:20.706183 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fbd647ad24d9b05888bf1755d5d9e9cb1a1f03683906399a6e86d4b6647f077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5cb8dcb848-zz8td" Nov 4 23:47:20.706305 kubelet[2821]: E1104 23:47:20.706227 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fbd647ad24d9b05888bf1755d5d9e9cb1a1f03683906399a6e86d4b6647f077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cb8dcb848-zz8td" Nov 4 23:47:20.706335 kubelet[2821]: E1104 23:47:20.706305 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cb8dcb848-zz8td_calico-apiserver(7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cb8dcb848-zz8td_calico-apiserver(7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fbd647ad24d9b05888bf1755d5d9e9cb1a1f03683906399a6e86d4b6647f077\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cb8dcb848-zz8td" podUID="7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff" Nov 4 23:47:20.754515 containerd[1601]: time="2025-11-04T23:47:20.754406944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8f987886-89zn9,Uid:22717a25-e0bd-4f8f-934c-4d9e328d23a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31fe1d7e5ec8986574fc46044dbcd79ace84b65ccef2802479a0821ffda342e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 
23:47:20.755231 kubelet[2821]: E1104 23:47:20.754839 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31fe1d7e5ec8986574fc46044dbcd79ace84b65ccef2802479a0821ffda342e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.755231 kubelet[2821]: E1104 23:47:20.754949 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31fe1d7e5ec8986574fc46044dbcd79ace84b65ccef2802479a0821ffda342e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d8f987886-89zn9" Nov 4 23:47:20.755231 kubelet[2821]: E1104 23:47:20.754978 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31fe1d7e5ec8986574fc46044dbcd79ace84b65ccef2802479a0821ffda342e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d8f987886-89zn9" Nov 4 23:47:20.755698 kubelet[2821]: E1104 23:47:20.755056 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d8f987886-89zn9_calico-apiserver(22717a25-e0bd-4f8f-934c-4d9e328d23a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d8f987886-89zn9_calico-apiserver(22717a25-e0bd-4f8f-934c-4d9e328d23a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31fe1d7e5ec8986574fc46044dbcd79ace84b65ccef2802479a0821ffda342e1\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d8f987886-89zn9" podUID="22717a25-e0bd-4f8f-934c-4d9e328d23a6" Nov 4 23:47:20.757247 containerd[1601]: time="2025-11-04T23:47:20.757191258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dt8tc,Uid:f1677806-4e2b-4950-9276-2412224d7bb8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b70d92b5c62862f96ff941ec6eea90fc71d535a60c3d755af0c7834457d46e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.757446 kubelet[2821]: E1104 23:47:20.757410 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b70d92b5c62862f96ff941ec6eea90fc71d535a60c3d755af0c7834457d46e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.757495 kubelet[2821]: E1104 23:47:20.757454 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b70d92b5c62862f96ff941ec6eea90fc71d535a60c3d755af0c7834457d46e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dt8tc" Nov 4 23:47:20.757495 kubelet[2821]: E1104 23:47:20.757477 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9b70d92b5c62862f96ff941ec6eea90fc71d535a60c3d755af0c7834457d46e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dt8tc" Nov 4 23:47:20.757602 kubelet[2821]: E1104 23:47:20.757543 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-dt8tc_calico-system(f1677806-4e2b-4950-9276-2412224d7bb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dt8tc_calico-system(f1677806-4e2b-4950-9276-2412224d7bb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b70d92b5c62862f96ff941ec6eea90fc71d535a60c3d755af0c7834457d46e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dt8tc" podUID="f1677806-4e2b-4950-9276-2412224d7bb8" Nov 4 23:47:20.758413 containerd[1601]: time="2025-11-04T23:47:20.758378111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8f987886-b4d7r,Uid:f48ec58e-62fc-4c79-8936-16e0e5b98045,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f8eb7eadb5691c39552f016c4635c186365730528e5c449e40e55fc65d76e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.758587 kubelet[2821]: E1104 23:47:20.758539 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f8eb7eadb5691c39552f016c4635c186365730528e5c449e40e55fc65d76e86\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.758648 kubelet[2821]: E1104 23:47:20.758579 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f8eb7eadb5691c39552f016c4635c186365730528e5c449e40e55fc65d76e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d8f987886-b4d7r" Nov 4 23:47:20.758682 kubelet[2821]: E1104 23:47:20.758639 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f8eb7eadb5691c39552f016c4635c186365730528e5c449e40e55fc65d76e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d8f987886-b4d7r" Nov 4 23:47:20.758727 kubelet[2821]: E1104 23:47:20.758683 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d8f987886-b4d7r_calico-apiserver(f48ec58e-62fc-4c79-8936-16e0e5b98045)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d8f987886-b4d7r_calico-apiserver(f48ec58e-62fc-4c79-8936-16e0e5b98045)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f8eb7eadb5691c39552f016c4635c186365730528e5c449e40e55fc65d76e86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d8f987886-b4d7r" podUID="f48ec58e-62fc-4c79-8936-16e0e5b98045" Nov 4 
23:47:20.765873 containerd[1601]: time="2025-11-04T23:47:20.765810511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54cff4559b-g4czq,Uid:62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c90e254bda5d3477b9f99410b8a841050a30a9230db52bfbee417d2ef86753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.766158 kubelet[2821]: E1104 23:47:20.766089 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c90e254bda5d3477b9f99410b8a841050a30a9230db52bfbee417d2ef86753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.766225 kubelet[2821]: E1104 23:47:20.766198 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c90e254bda5d3477b9f99410b8a841050a30a9230db52bfbee417d2ef86753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54cff4559b-g4czq" Nov 4 23:47:20.766282 kubelet[2821]: E1104 23:47:20.766231 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c90e254bda5d3477b9f99410b8a841050a30a9230db52bfbee417d2ef86753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54cff4559b-g4czq" 
Nov 4 23:47:20.766345 kubelet[2821]: E1104 23:47:20.766300 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54cff4559b-g4czq_calico-system(62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54cff4559b-g4czq_calico-system(62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44c90e254bda5d3477b9f99410b8a841050a30a9230db52bfbee417d2ef86753\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54cff4559b-g4czq" podUID="62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8" Nov 4 23:47:20.768427 containerd[1601]: time="2025-11-04T23:47:20.768372642Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vdw45,Uid:c4a097ba-de0d-4e75-973d-ce0fa1163477,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40c75a1f6af5d6229048cce012be484fd6d7b5071420d0aad4770e75093a475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.768589 kubelet[2821]: E1104 23:47:20.768549 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40c75a1f6af5d6229048cce012be484fd6d7b5071420d0aad4770e75093a475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.768668 kubelet[2821]: E1104 23:47:20.768607 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"c40c75a1f6af5d6229048cce012be484fd6d7b5071420d0aad4770e75093a475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vdw45" Nov 4 23:47:20.768668 kubelet[2821]: E1104 23:47:20.768631 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40c75a1f6af5d6229048cce012be484fd6d7b5071420d0aad4770e75093a475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vdw45" Nov 4 23:47:20.768746 kubelet[2821]: E1104 23:47:20.768694 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vdw45_kube-system(c4a097ba-de0d-4e75-973d-ce0fa1163477)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vdw45_kube-system(c4a097ba-de0d-4e75-973d-ce0fa1163477)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c40c75a1f6af5d6229048cce012be484fd6d7b5071420d0aad4770e75093a475\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vdw45" podUID="c4a097ba-de0d-4e75-973d-ce0fa1163477" Nov 4 23:47:20.779179 containerd[1601]: time="2025-11-04T23:47:20.779122809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jfsqb,Uid:996cd107-7b4a-4765-be02-ba532f9cecae,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dfeeef7dea28d89ebf8b20b306a5b60797d1b317186900ad84ea450327e23c1\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.779359 kubelet[2821]: E1104 23:47:20.779311 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dfeeef7dea28d89ebf8b20b306a5b60797d1b317186900ad84ea450327e23c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.779448 kubelet[2821]: E1104 23:47:20.779368 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dfeeef7dea28d89ebf8b20b306a5b60797d1b317186900ad84ea450327e23c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jfsqb" Nov 4 23:47:20.779448 kubelet[2821]: E1104 23:47:20.779400 2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dfeeef7dea28d89ebf8b20b306a5b60797d1b317186900ad84ea450327e23c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jfsqb" Nov 4 23:47:20.779668 kubelet[2821]: E1104 23:47:20.779462 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jfsqb_kube-system(996cd107-7b4a-4765-be02-ba532f9cecae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jfsqb_kube-system(996cd107-7b4a-4765-be02-ba532f9cecae)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"0dfeeef7dea28d89ebf8b20b306a5b60797d1b317186900ad84ea450327e23c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jfsqb" podUID="996cd107-7b4a-4765-be02-ba532f9cecae" Nov 4 23:47:20.780517 containerd[1601]: time="2025-11-04T23:47:20.780484200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfb77c96-hmxm7,Uid:c35617ca-16ee-4ea4-b266-2395bc382e38,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3b228fa7d2481c9489ead88fa87493926850900ed60aac3b6f4055f9c8ae8ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.781042 kubelet[2821]: E1104 23:47:20.781002 2821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3b228fa7d2481c9489ead88fa87493926850900ed60aac3b6f4055f9c8ae8ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:47:20.781139 kubelet[2821]: E1104 23:47:20.781069 2821 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3b228fa7d2481c9489ead88fa87493926850900ed60aac3b6f4055f9c8ae8ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bfb77c96-hmxm7" Nov 4 23:47:20.781139 kubelet[2821]: E1104 23:47:20.781110 
2821 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3b228fa7d2481c9489ead88fa87493926850900ed60aac3b6f4055f9c8ae8ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bfb77c96-hmxm7" Nov 4 23:47:20.781226 kubelet[2821]: E1104 23:47:20.781158 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bfb77c96-hmxm7_calico-system(c35617ca-16ee-4ea4-b266-2395bc382e38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bfb77c96-hmxm7_calico-system(c35617ca-16ee-4ea4-b266-2395bc382e38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3b228fa7d2481c9489ead88fa87493926850900ed60aac3b6f4055f9c8ae8ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bfb77c96-hmxm7" podUID="c35617ca-16ee-4ea4-b266-2395bc382e38" Nov 4 23:47:21.072148 systemd[1]: run-netns-cni\x2d1b204606\x2d3950\x2d4c21\x2da24b\x2d473b6c7b1126.mount: Deactivated successfully. Nov 4 23:47:27.098399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount166709038.mount: Deactivated successfully. 
Nov 4 23:47:29.182441 containerd[1601]: time="2025-11-04T23:47:29.182350302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:47:29.209809 containerd[1601]: time="2025-11-04T23:47:29.209644459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 4 23:47:29.224543 containerd[1601]: time="2025-11-04T23:47:29.224472613Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:47:29.243857 kubelet[2821]: I1104 23:47:29.243683 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:47:29.244960 kubelet[2821]: E1104 23:47:29.244215 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:29.245034 containerd[1601]: time="2025-11-04T23:47:29.244980157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:47:29.245555 containerd[1601]: time="2025-11-04T23:47:29.245515660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.970670993s" Nov 4 23:47:29.245642 containerd[1601]: time="2025-11-04T23:47:29.245554882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference 
\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 23:47:29.358443 containerd[1601]: time="2025-11-04T23:47:29.358369024Z" level=info msg="CreateContainer within sandbox \"d99a08c66ba847f0a60f0b6a8ab6674ebcd7b384fdf35f4a565343bbbbbf0dde\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 23:47:29.358651 kubelet[2821]: E1104 23:47:29.358379 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:29.483536 containerd[1601]: time="2025-11-04T23:47:29.483387455Z" level=info msg="Container db79753ced536ed20ef1b1468ccb41af746f4af7d6381afecb94bccf7bedd41a: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:47:29.723391 containerd[1601]: time="2025-11-04T23:47:29.723324971Z" level=info msg="CreateContainer within sandbox \"d99a08c66ba847f0a60f0b6a8ab6674ebcd7b384fdf35f4a565343bbbbbf0dde\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"db79753ced536ed20ef1b1468ccb41af746f4af7d6381afecb94bccf7bedd41a\"" Nov 4 23:47:29.724155 containerd[1601]: time="2025-11-04T23:47:29.724119313Z" level=info msg="StartContainer for \"db79753ced536ed20ef1b1468ccb41af746f4af7d6381afecb94bccf7bedd41a\"" Nov 4 23:47:29.726089 containerd[1601]: time="2025-11-04T23:47:29.726056245Z" level=info msg="connecting to shim db79753ced536ed20ef1b1468ccb41af746f4af7d6381afecb94bccf7bedd41a" address="unix:///run/containerd/s/45c562d20f4842057d1369ad68aaac5d6f4e1b504a7e76a6283fffe5816cf8b9" protocol=ttrpc version=3 Nov 4 23:47:29.757085 systemd[1]: Started cri-containerd-db79753ced536ed20ef1b1468ccb41af746f4af7d6381afecb94bccf7bedd41a.scope - libcontainer container db79753ced536ed20ef1b1468ccb41af746f4af7d6381afecb94bccf7bedd41a. 
Nov 4 23:47:29.860564 containerd[1601]: time="2025-11-04T23:47:29.860483581Z" level=info msg="StartContainer for \"db79753ced536ed20ef1b1468ccb41af746f4af7d6381afecb94bccf7bedd41a\" returns successfully" Nov 4 23:47:29.914049 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 23:47:29.915320 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 4 23:47:30.303837 kubelet[2821]: E1104 23:47:30.303546 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:30.340304 kubelet[2821]: I1104 23:47:30.338772 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ftmnr" podStartSLOduration=1.589546927 podStartE2EDuration="25.338751157s" podCreationTimestamp="2025-11-04 23:47:05 +0000 UTC" firstStartedPulling="2025-11-04 23:47:05.4971425 +0000 UTC m=+22.444397759" lastFinishedPulling="2025-11-04 23:47:29.24634673 +0000 UTC m=+46.193601989" observedRunningTime="2025-11-04 23:47:30.33740664 +0000 UTC m=+47.284661899" watchObservedRunningTime="2025-11-04 23:47:30.338751157 +0000 UTC m=+47.286006416" Nov 4 23:47:30.386283 kubelet[2821]: I1104 23:47:30.386207 2821 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8-whisker-ca-bundle\") pod \"62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8\" (UID: \"62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8\") " Nov 4 23:47:30.386449 kubelet[2821]: I1104 23:47:30.386307 2821 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8-whisker-backend-key-pair\") pod \"62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8\" (UID: \"62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8\") " Nov 4 23:47:30.386449 
kubelet[2821]: I1104 23:47:30.386328 2821 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thtxk\" (UniqueName: \"kubernetes.io/projected/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8-kube-api-access-thtxk\") pod \"62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8\" (UID: \"62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8\") " Nov 4 23:47:30.389961 kubelet[2821]: I1104 23:47:30.388592 2821 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8" (UID: "62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:47:30.400790 kubelet[2821]: I1104 23:47:30.400744 2821 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8" (UID: "62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 23:47:30.401655 systemd[1]: var-lib-kubelet-pods-62847ab3\x2d7c8e\x2d4cdf\x2da2a4\x2dd5f59a1f5dd8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dthtxk.mount: Deactivated successfully. Nov 4 23:47:30.401744 kubelet[2821]: I1104 23:47:30.401686 2821 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8-kube-api-access-thtxk" (OuterVolumeSpecName: "kube-api-access-thtxk") pod "62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8" (UID: "62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8"). InnerVolumeSpecName "kube-api-access-thtxk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:47:30.401933 systemd[1]: var-lib-kubelet-pods-62847ab3\x2d7c8e\x2d4cdf\x2da2a4\x2dd5f59a1f5dd8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 4 23:47:30.487196 kubelet[2821]: I1104 23:47:30.487138 2821 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 4 23:47:30.487196 kubelet[2821]: I1104 23:47:30.487173 2821 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-thtxk\" (UniqueName: \"kubernetes.io/projected/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8-kube-api-access-thtxk\") on node \"localhost\" DevicePath \"\"" Nov 4 23:47:30.487196 kubelet[2821]: I1104 23:47:30.487183 2821 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 4 23:47:31.175202 systemd[1]: Removed slice kubepods-besteffort-pod62847ab3_7c8e_4cdf_a2a4_d5f59a1f5dd8.slice - libcontainer container kubepods-besteffort-pod62847ab3_7c8e_4cdf_a2a4_d5f59a1f5dd8.slice. Nov 4 23:47:31.370051 systemd[1]: Created slice kubepods-besteffort-pod0be97d2f_7ecc_4809_a2c9_d583b48d2a01.slice - libcontainer container kubepods-besteffort-pod0be97d2f_7ecc_4809_a2c9_d583b48d2a01.slice. 
Nov 4 23:47:31.394936 kubelet[2821]: I1104 23:47:31.394833 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn6pl\" (UniqueName: \"kubernetes.io/projected/0be97d2f-7ecc-4809-a2c9-d583b48d2a01-kube-api-access-qn6pl\") pod \"whisker-944495598-7w457\" (UID: \"0be97d2f-7ecc-4809-a2c9-d583b48d2a01\") " pod="calico-system/whisker-944495598-7w457" Nov 4 23:47:31.394936 kubelet[2821]: I1104 23:47:31.394925 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0be97d2f-7ecc-4809-a2c9-d583b48d2a01-whisker-backend-key-pair\") pod \"whisker-944495598-7w457\" (UID: \"0be97d2f-7ecc-4809-a2c9-d583b48d2a01\") " pod="calico-system/whisker-944495598-7w457" Nov 4 23:47:31.394936 kubelet[2821]: I1104 23:47:31.394948 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be97d2f-7ecc-4809-a2c9-d583b48d2a01-whisker-ca-bundle\") pod \"whisker-944495598-7w457\" (UID: \"0be97d2f-7ecc-4809-a2c9-d583b48d2a01\") " pod="calico-system/whisker-944495598-7w457" Nov 4 23:47:31.674524 containerd[1601]: time="2025-11-04T23:47:31.674462086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-944495598-7w457,Uid:0be97d2f-7ecc-4809-a2c9-d583b48d2a01,Namespace:calico-system,Attempt:0,}" Nov 4 23:47:31.837777 systemd-networkd[1490]: calif09bc8d9ace: Link UP Nov 4 23:47:31.838609 systemd-networkd[1490]: calif09bc8d9ace: Gained carrier Nov 4 23:47:31.874323 containerd[1601]: 2025-11-04 23:47:31.701 [INFO][4038] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:47:31.874323 containerd[1601]: 2025-11-04 23:47:31.720 [INFO][4038] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--944495598--7w457-eth0 
whisker-944495598- calico-system 0be97d2f-7ecc-4809-a2c9-d583b48d2a01 951 0 2025-11-04 23:47:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:944495598 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-944495598-7w457 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif09bc8d9ace [] [] }} ContainerID="5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" Namespace="calico-system" Pod="whisker-944495598-7w457" WorkloadEndpoint="localhost-k8s-whisker--944495598--7w457-" Nov 4 23:47:31.874323 containerd[1601]: 2025-11-04 23:47:31.721 [INFO][4038] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" Namespace="calico-system" Pod="whisker-944495598-7w457" WorkloadEndpoint="localhost-k8s-whisker--944495598--7w457-eth0" Nov 4 23:47:31.874323 containerd[1601]: 2025-11-04 23:47:31.787 [INFO][4053] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" HandleID="k8s-pod-network.5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" Workload="localhost-k8s-whisker--944495598--7w457-eth0" Nov 4 23:47:31.874591 containerd[1601]: 2025-11-04 23:47:31.788 [INFO][4053] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" HandleID="k8s-pod-network.5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" Workload="localhost-k8s-whisker--944495598--7w457-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ce790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-944495598-7w457", "timestamp":"2025-11-04 23:47:31.787705954 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:47:31.874591 containerd[1601]: 2025-11-04 23:47:31.788 [INFO][4053] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:47:31.874591 containerd[1601]: 2025-11-04 23:47:31.788 [INFO][4053] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:47:31.874591 containerd[1601]: 2025-11-04 23:47:31.788 [INFO][4053] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:47:31.874591 containerd[1601]: 2025-11-04 23:47:31.797 [INFO][4053] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" host="localhost" Nov 4 23:47:31.874591 containerd[1601]: 2025-11-04 23:47:31.802 [INFO][4053] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:47:31.874591 containerd[1601]: 2025-11-04 23:47:31.806 [INFO][4053] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:47:31.874591 containerd[1601]: 2025-11-04 23:47:31.809 [INFO][4053] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:31.874591 containerd[1601]: 2025-11-04 23:47:31.812 [INFO][4053] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:31.874591 containerd[1601]: 2025-11-04 23:47:31.812 [INFO][4053] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" host="localhost" Nov 4 23:47:31.874817 containerd[1601]: 2025-11-04 23:47:31.814 [INFO][4053] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b Nov 4 23:47:31.874817 containerd[1601]: 
2025-11-04 23:47:31.820 [INFO][4053] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" host="localhost" Nov 4 23:47:31.874817 containerd[1601]: 2025-11-04 23:47:31.825 [INFO][4053] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" host="localhost" Nov 4 23:47:31.874817 containerd[1601]: 2025-11-04 23:47:31.825 [INFO][4053] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" host="localhost" Nov 4 23:47:31.874817 containerd[1601]: 2025-11-04 23:47:31.825 [INFO][4053] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:47:31.874817 containerd[1601]: 2025-11-04 23:47:31.825 [INFO][4053] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" HandleID="k8s-pod-network.5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" Workload="localhost-k8s-whisker--944495598--7w457-eth0" Nov 4 23:47:31.875026 containerd[1601]: 2025-11-04 23:47:31.829 [INFO][4038] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" Namespace="calico-system" Pod="whisker-944495598-7w457" WorkloadEndpoint="localhost-k8s-whisker--944495598--7w457-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--944495598--7w457-eth0", GenerateName:"whisker-944495598-", Namespace:"calico-system", SelfLink:"", UID:"0be97d2f-7ecc-4809-a2c9-d583b48d2a01", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 
47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"944495598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-944495598-7w457", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif09bc8d9ace", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:31.875026 containerd[1601]: 2025-11-04 23:47:31.829 [INFO][4038] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" Namespace="calico-system" Pod="whisker-944495598-7w457" WorkloadEndpoint="localhost-k8s-whisker--944495598--7w457-eth0" Nov 4 23:47:31.875107 containerd[1601]: 2025-11-04 23:47:31.829 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif09bc8d9ace ContainerID="5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" Namespace="calico-system" Pod="whisker-944495598-7w457" WorkloadEndpoint="localhost-k8s-whisker--944495598--7w457-eth0" Nov 4 23:47:31.875107 containerd[1601]: 2025-11-04 23:47:31.838 [INFO][4038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" Namespace="calico-system" Pod="whisker-944495598-7w457" 
WorkloadEndpoint="localhost-k8s-whisker--944495598--7w457-eth0" Nov 4 23:47:31.875155 containerd[1601]: 2025-11-04 23:47:31.839 [INFO][4038] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" Namespace="calico-system" Pod="whisker-944495598-7w457" WorkloadEndpoint="localhost-k8s-whisker--944495598--7w457-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--944495598--7w457-eth0", GenerateName:"whisker-944495598-", Namespace:"calico-system", SelfLink:"", UID:"0be97d2f-7ecc-4809-a2c9-d583b48d2a01", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"944495598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b", Pod:"whisker-944495598-7w457", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif09bc8d9ace", MAC:"ae:a2:92:45:57:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:31.875205 containerd[1601]: 2025-11-04 23:47:31.870 [INFO][4038] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" Namespace="calico-system" Pod="whisker-944495598-7w457" WorkloadEndpoint="localhost-k8s-whisker--944495598--7w457-eth0" Nov 4 23:47:32.164247 containerd[1601]: time="2025-11-04T23:47:32.164091698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfb77c96-hmxm7,Uid:c35617ca-16ee-4ea4-b266-2395bc382e38,Namespace:calico-system,Attempt:0,}" Nov 4 23:47:32.164498 containerd[1601]: time="2025-11-04T23:47:32.164314534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qfh2g,Uid:b6136b3a-c7e7-4b68-a7f8-18b611db9e11,Namespace:calico-system,Attempt:0,}" Nov 4 23:47:32.468280 systemd-networkd[1490]: vxlan.calico: Link UP Nov 4 23:47:32.468298 systemd-networkd[1490]: vxlan.calico: Gained carrier Nov 4 23:47:32.663765 containerd[1601]: time="2025-11-04T23:47:32.663684672Z" level=info msg="connecting to shim 5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b" address="unix:///run/containerd/s/c3ec81a3b5b0afa5023ba333ed46294af76731889b4083a5bc503129ae3e7a85" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:47:32.702987 systemd-networkd[1490]: cali98577adf519: Link UP Nov 4 23:47:32.705427 systemd-networkd[1490]: cali98577adf519: Gained carrier Nov 4 23:47:32.714124 systemd[1]: Started cri-containerd-5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b.scope - libcontainer container 5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b. 
Nov 4 23:47:32.744331 containerd[1601]: 2025-11-04 23:47:32.566 [INFO][4236] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0 calico-kube-controllers-7bfb77c96- calico-system c35617ca-16ee-4ea4-b266-2395bc382e38 874 0 2025-11-04 23:47:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bfb77c96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7bfb77c96-hmxm7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali98577adf519 [] [] }} ContainerID="1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" Namespace="calico-system" Pod="calico-kube-controllers-7bfb77c96-hmxm7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-" Nov 4 23:47:32.744331 containerd[1601]: 2025-11-04 23:47:32.566 [INFO][4236] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" Namespace="calico-system" Pod="calico-kube-controllers-7bfb77c96-hmxm7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0" Nov 4 23:47:32.744331 containerd[1601]: 2025-11-04 23:47:32.619 [INFO][4268] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" HandleID="k8s-pod-network.1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" Workload="localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0" Nov 4 23:47:32.745148 containerd[1601]: 2025-11-04 23:47:32.619 [INFO][4268] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" 
HandleID="k8s-pod-network.1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" Workload="localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df920), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7bfb77c96-hmxm7", "timestamp":"2025-11-04 23:47:32.619461846 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:47:32.745148 containerd[1601]: 2025-11-04 23:47:32.619 [INFO][4268] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:47:32.745148 containerd[1601]: 2025-11-04 23:47:32.619 [INFO][4268] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:47:32.745148 containerd[1601]: 2025-11-04 23:47:32.619 [INFO][4268] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:47:32.745148 containerd[1601]: 2025-11-04 23:47:32.636 [INFO][4268] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" host="localhost" Nov 4 23:47:32.745148 containerd[1601]: 2025-11-04 23:47:32.642 [INFO][4268] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:47:32.745148 containerd[1601]: 2025-11-04 23:47:32.649 [INFO][4268] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:47:32.745148 containerd[1601]: 2025-11-04 23:47:32.651 [INFO][4268] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:32.745148 containerd[1601]: 2025-11-04 23:47:32.656 [INFO][4268] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:32.745148 containerd[1601]: 
2025-11-04 23:47:32.656 [INFO][4268] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" host="localhost" Nov 4 23:47:32.745571 containerd[1601]: 2025-11-04 23:47:32.663 [INFO][4268] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b Nov 4 23:47:32.745571 containerd[1601]: 2025-11-04 23:47:32.672 [INFO][4268] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" host="localhost" Nov 4 23:47:32.745571 containerd[1601]: 2025-11-04 23:47:32.689 [INFO][4268] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" host="localhost" Nov 4 23:47:32.745571 containerd[1601]: 2025-11-04 23:47:32.689 [INFO][4268] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" host="localhost" Nov 4 23:47:32.745571 containerd[1601]: 2025-11-04 23:47:32.689 [INFO][4268] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:47:32.745571 containerd[1601]: 2025-11-04 23:47:32.689 [INFO][4268] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" HandleID="k8s-pod-network.1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" Workload="localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0" Nov 4 23:47:32.745825 containerd[1601]: 2025-11-04 23:47:32.697 [INFO][4236] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" Namespace="calico-system" Pod="calico-kube-controllers-7bfb77c96-hmxm7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0", GenerateName:"calico-kube-controllers-7bfb77c96-", Namespace:"calico-system", SelfLink:"", UID:"c35617ca-16ee-4ea4-b266-2395bc382e38", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 47, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfb77c96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7bfb77c96-hmxm7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali98577adf519", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:32.745973 containerd[1601]: 2025-11-04 23:47:32.697 [INFO][4236] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" Namespace="calico-system" Pod="calico-kube-controllers-7bfb77c96-hmxm7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0" Nov 4 23:47:32.745973 containerd[1601]: 2025-11-04 23:47:32.697 [INFO][4236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98577adf519 ContainerID="1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" Namespace="calico-system" Pod="calico-kube-controllers-7bfb77c96-hmxm7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0" Nov 4 23:47:32.745973 containerd[1601]: 2025-11-04 23:47:32.705 [INFO][4236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" Namespace="calico-system" Pod="calico-kube-controllers-7bfb77c96-hmxm7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0" Nov 4 23:47:32.746084 containerd[1601]: 2025-11-04 23:47:32.706 [INFO][4236] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" Namespace="calico-system" Pod="calico-kube-controllers-7bfb77c96-hmxm7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0", GenerateName:"calico-kube-controllers-7bfb77c96-", Namespace:"calico-system", SelfLink:"", UID:"c35617ca-16ee-4ea4-b266-2395bc382e38", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 47, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfb77c96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b", Pod:"calico-kube-controllers-7bfb77c96-hmxm7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali98577adf519", MAC:"8a:8a:41:2a:0f:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:32.746159 containerd[1601]: 2025-11-04 23:47:32.727 [INFO][4236] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" Namespace="calico-system" Pod="calico-kube-controllers-7bfb77c96-hmxm7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bfb77c96--hmxm7-eth0" Nov 4 23:47:32.749158 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 
4 23:47:32.794340 systemd-networkd[1490]: cali9fb31984e08: Link UP Nov 4 23:47:32.795075 containerd[1601]: time="2025-11-04T23:47:32.794874954Z" level=info msg="connecting to shim 1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b" address="unix:///run/containerd/s/30661fec11470fadd1d53b36e4db673be275037833404c2a8c6abc43950502bd" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:47:32.795618 systemd-networkd[1490]: cali9fb31984e08: Gained carrier Nov 4 23:47:32.826628 containerd[1601]: 2025-11-04 23:47:32.637 [INFO][4248] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qfh2g-eth0 csi-node-driver- calico-system b6136b3a-c7e7-4b68-a7f8-18b611db9e11 737 0 2025-11-04 23:47:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qfh2g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9fb31984e08 [] [] }} ContainerID="eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" Namespace="calico-system" Pod="csi-node-driver-qfh2g" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfh2g-" Nov 4 23:47:32.826628 containerd[1601]: 2025-11-04 23:47:32.637 [INFO][4248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" Namespace="calico-system" Pod="csi-node-driver-qfh2g" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfh2g-eth0" Nov 4 23:47:32.826628 containerd[1601]: 2025-11-04 23:47:32.708 [INFO][4282] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" 
HandleID="k8s-pod-network.eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" Workload="localhost-k8s-csi--node--driver--qfh2g-eth0" Nov 4 23:47:32.827001 containerd[1601]: 2025-11-04 23:47:32.711 [INFO][4282] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" HandleID="k8s-pod-network.eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" Workload="localhost-k8s-csi--node--driver--qfh2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000457a30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qfh2g", "timestamp":"2025-11-04 23:47:32.70682535 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:47:32.827001 containerd[1601]: 2025-11-04 23:47:32.711 [INFO][4282] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:47:32.827001 containerd[1601]: 2025-11-04 23:47:32.711 [INFO][4282] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:47:32.827001 containerd[1601]: 2025-11-04 23:47:32.711 [INFO][4282] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:47:32.827001 containerd[1601]: 2025-11-04 23:47:32.735 [INFO][4282] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" host="localhost" Nov 4 23:47:32.827001 containerd[1601]: 2025-11-04 23:47:32.743 [INFO][4282] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:47:32.827001 containerd[1601]: 2025-11-04 23:47:32.754 [INFO][4282] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:47:32.827001 containerd[1601]: 2025-11-04 23:47:32.757 [INFO][4282] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:32.827001 containerd[1601]: 2025-11-04 23:47:32.760 [INFO][4282] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:32.827001 containerd[1601]: 2025-11-04 23:47:32.760 [INFO][4282] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" host="localhost" Nov 4 23:47:32.827298 containerd[1601]: 2025-11-04 23:47:32.762 [INFO][4282] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36 Nov 4 23:47:32.827298 containerd[1601]: 2025-11-04 23:47:32.771 [INFO][4282] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" host="localhost" Nov 4 23:47:32.827298 containerd[1601]: 2025-11-04 23:47:32.781 [INFO][4282] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" host="localhost" Nov 4 23:47:32.827298 containerd[1601]: 2025-11-04 23:47:32.782 [INFO][4282] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" host="localhost" Nov 4 23:47:32.827298 containerd[1601]: 2025-11-04 23:47:32.782 [INFO][4282] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:47:32.827298 containerd[1601]: 2025-11-04 23:47:32.782 [INFO][4282] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" HandleID="k8s-pod-network.eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" Workload="localhost-k8s-csi--node--driver--qfh2g-eth0" Nov 4 23:47:32.827481 containerd[1601]: 2025-11-04 23:47:32.788 [INFO][4248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" Namespace="calico-system" Pod="csi-node-driver-qfh2g" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfh2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qfh2g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6136b3a-c7e7-4b68-a7f8-18b611db9e11", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 47, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qfh2g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9fb31984e08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:32.827574 containerd[1601]: 2025-11-04 23:47:32.789 [INFO][4248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" Namespace="calico-system" Pod="csi-node-driver-qfh2g" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfh2g-eth0" Nov 4 23:47:32.827574 containerd[1601]: 2025-11-04 23:47:32.789 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fb31984e08 ContainerID="eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" Namespace="calico-system" Pod="csi-node-driver-qfh2g" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfh2g-eth0" Nov 4 23:47:32.827574 containerd[1601]: 2025-11-04 23:47:32.795 [INFO][4248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" Namespace="calico-system" Pod="csi-node-driver-qfh2g" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfh2g-eth0" Nov 4 23:47:32.827718 containerd[1601]: 2025-11-04 23:47:32.798 [INFO][4248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" 
Namespace="calico-system" Pod="csi-node-driver-qfh2g" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfh2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qfh2g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6136b3a-c7e7-4b68-a7f8-18b611db9e11", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 47, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36", Pod:"csi-node-driver-qfh2g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9fb31984e08", MAC:"0a:f7:a0:1f:12:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:32.827797 containerd[1601]: 2025-11-04 23:47:32.818 [INFO][4248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" Namespace="calico-system" Pod="csi-node-driver-qfh2g" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qfh2g-eth0" Nov 4 23:47:32.838082 containerd[1601]: time="2025-11-04T23:47:32.838014982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-944495598-7w457,Uid:0be97d2f-7ecc-4809-a2c9-d583b48d2a01,Namespace:calico-system,Attempt:0,} returns sandbox id \"5927b118cce4d951030ab960a1af754614f294bab137bc2c565fbcf789b4f02b\"" Nov 4 23:47:32.842290 containerd[1601]: time="2025-11-04T23:47:32.842194591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:47:32.865521 containerd[1601]: time="2025-11-04T23:47:32.865468315Z" level=info msg="connecting to shim eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36" address="unix:///run/containerd/s/1bf77f08e624e1f85104da1fb8fd9a0b769a8af83c5ff30b8220361a52bdb008" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:47:32.868323 systemd[1]: Started cri-containerd-1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b.scope - libcontainer container 1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b. Nov 4 23:47:32.887517 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:47:32.900406 systemd[1]: Started cri-containerd-eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36.scope - libcontainer container eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36. 
Nov 4 23:47:32.925226 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:47:32.944165 containerd[1601]: time="2025-11-04T23:47:32.944067976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfb77c96-hmxm7,Uid:c35617ca-16ee-4ea4-b266-2395bc382e38,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a27e200f15d28de9e833ad3ae15ef693e02741f1c2d9704a3307ff481a6c15b\"" Nov 4 23:47:32.954058 containerd[1601]: time="2025-11-04T23:47:32.954000958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qfh2g,Uid:b6136b3a-c7e7-4b68-a7f8-18b611db9e11,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb8c9a398ad0b9d6426ffb4f7d02ffafb90c1ee5e4c62480fbf2351f99370d36\"" Nov 4 23:47:33.107524 kubelet[2821]: I1104 23:47:33.107309 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:47:33.109057 kubelet[2821]: E1104 23:47:33.108995 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:33.166167 kubelet[2821]: E1104 23:47:33.166119 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:33.166365 containerd[1601]: time="2025-11-04T23:47:33.166121808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8f987886-b4d7r,Uid:f48ec58e-62fc-4c79-8936-16e0e5b98045,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:47:33.166871 containerd[1601]: time="2025-11-04T23:47:33.166836362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb8dcb848-zz8td,Uid:7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:47:33.167054 containerd[1601]: 
time="2025-11-04T23:47:33.167005697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dt8tc,Uid:f1677806-4e2b-4950-9276-2412224d7bb8,Namespace:calico-system,Attempt:0,}" Nov 4 23:47:33.167204 containerd[1601]: time="2025-11-04T23:47:33.167178280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vdw45,Uid:c4a097ba-de0d-4e75-973d-ce0fa1163477,Namespace:kube-system,Attempt:0,}" Nov 4 23:47:33.173047 kubelet[2821]: I1104 23:47:33.169400 2821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8" path="/var/lib/kubelet/pods/62847ab3-7c8e-4cdf-a2a4-d5f59a1f5dd8/volumes" Nov 4 23:47:33.230935 containerd[1601]: time="2025-11-04T23:47:33.229086098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:33.232570 containerd[1601]: time="2025-11-04T23:47:33.232428217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:47:33.232570 containerd[1601]: time="2025-11-04T23:47:33.232539023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:47:33.235847 kubelet[2821]: E1104 23:47:33.234712 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:47:33.236645 kubelet[2821]: E1104 23:47:33.236245 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:47:33.238125 containerd[1601]: time="2025-11-04T23:47:33.238086237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:47:33.245931 kubelet[2821]: E1104 23:47:33.245666 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:67da652ae71641aca2aebf2d1372af6d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qn6pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-944495598-7w457_calico-system(0be97d2f-7ecc-4809-a2c9-d583b48d2a01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:33.286616 containerd[1601]: time="2025-11-04T23:47:33.286563496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db79753ced536ed20ef1b1468ccb41af746f4af7d6381afecb94bccf7bedd41a\" id:\"1a6c09e4c6e10ab8c2f83dae856af979e32e2fe520f63cf0eb7f4172edfc1a12\" pid:4483 exit_status:1 exited_at:{seconds:1762300053 nanos:285208878}" Nov 4 23:47:33.365493 systemd-networkd[1490]: cali4b6a34a1550: Link UP Nov 4 23:47:33.369296 systemd-networkd[1490]: cali4b6a34a1550: Gained carrier Nov 4 23:47:33.420460 containerd[1601]: time="2025-11-04T23:47:33.420395971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db79753ced536ed20ef1b1468ccb41af746f4af7d6381afecb94bccf7bedd41a\" id:\"fd1ef7622e8bcbe60e052634dc9f6087fe22ba55ee371d9fd84c6a3f85f90485\" pid:4587 exit_status:1 exited_at:{seconds:1762300053 nanos:420035007}" Nov 4 23:47:33.514108 systemd-networkd[1490]: vxlan.calico: Gained IPv6LL Nov 4 23:47:33.585805 containerd[1601]: time="2025-11-04T23:47:33.585714528Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:33.626281 containerd[1601]: time="2025-11-04T23:47:33.626034079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:47:33.626281 containerd[1601]: time="2025-11-04T23:47:33.626079624Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:47:33.626625 kubelet[2821]: E1104 23:47:33.626531 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:47:33.626700 kubelet[2821]: E1104 23:47:33.626643 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:47:33.627758 kubelet[2821]: E1104 23:47:33.627598 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frf4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7bfb77c96-hmxm7_calico-system(c35617ca-16ee-4ea4-b266-2395bc382e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:33.627923 containerd[1601]: time="2025-11-04T23:47:33.627809923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:47:33.628998 kubelet[2821]: E1104 23:47:33.628938 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfb77c96-hmxm7" podUID="c35617ca-16ee-4ea4-b266-2395bc382e38" Nov 4 23:47:33.661729 containerd[1601]: 2025-11-04 
23:47:33.247 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--vdw45-eth0 coredns-674b8bbfcf- kube-system c4a097ba-de0d-4e75-973d-ce0fa1163477 863 0 2025-11-04 23:46:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-vdw45 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4b6a34a1550 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdw45" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdw45-" Nov 4 23:47:33.661729 containerd[1601]: 2025-11-04 23:47:33.248 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdw45" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdw45-eth0" Nov 4 23:47:33.661729 containerd[1601]: 2025-11-04 23:47:33.280 [INFO][4560] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" HandleID="k8s-pod-network.c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" Workload="localhost-k8s-coredns--674b8bbfcf--vdw45-eth0" Nov 4 23:47:33.662013 containerd[1601]: 2025-11-04 23:47:33.280 [INFO][4560] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" HandleID="k8s-pod-network.c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" Workload="localhost-k8s-coredns--674b8bbfcf--vdw45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-vdw45", "timestamp":"2025-11-04 23:47:33.280159152 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:47:33.662013 containerd[1601]: 2025-11-04 23:47:33.280 [INFO][4560] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:47:33.662013 containerd[1601]: 2025-11-04 23:47:33.280 [INFO][4560] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:47:33.662013 containerd[1601]: 2025-11-04 23:47:33.280 [INFO][4560] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:47:33.662013 containerd[1601]: 2025-11-04 23:47:33.288 [INFO][4560] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" host="localhost" Nov 4 23:47:33.662013 containerd[1601]: 2025-11-04 23:47:33.305 [INFO][4560] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:47:33.662013 containerd[1601]: 2025-11-04 23:47:33.315 [INFO][4560] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:47:33.662013 containerd[1601]: 2025-11-04 23:47:33.321 [INFO][4560] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:33.662013 containerd[1601]: 2025-11-04 23:47:33.325 [INFO][4560] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:33.662013 containerd[1601]: 2025-11-04 23:47:33.325 [INFO][4560] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" host="localhost" Nov 4 23:47:33.662321 
containerd[1601]: 2025-11-04 23:47:33.330 [INFO][4560] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822 Nov 4 23:47:33.662321 containerd[1601]: 2025-11-04 23:47:33.337 [INFO][4560] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" host="localhost" Nov 4 23:47:33.662321 containerd[1601]: 2025-11-04 23:47:33.347 [INFO][4560] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" host="localhost" Nov 4 23:47:33.662321 containerd[1601]: 2025-11-04 23:47:33.348 [INFO][4560] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" host="localhost" Nov 4 23:47:33.662321 containerd[1601]: 2025-11-04 23:47:33.348 [INFO][4560] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:47:33.662321 containerd[1601]: 2025-11-04 23:47:33.348 [INFO][4560] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" HandleID="k8s-pod-network.c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" Workload="localhost-k8s-coredns--674b8bbfcf--vdw45-eth0" Nov 4 23:47:33.662458 containerd[1601]: 2025-11-04 23:47:33.357 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdw45" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdw45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vdw45-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c4a097ba-de0d-4e75-973d-ce0fa1163477", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-vdw45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b6a34a1550", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:33.662518 containerd[1601]: 2025-11-04 23:47:33.357 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdw45" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdw45-eth0" Nov 4 23:47:33.662518 containerd[1601]: 2025-11-04 23:47:33.357 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b6a34a1550 ContainerID="c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdw45" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdw45-eth0" Nov 4 23:47:33.662518 containerd[1601]: 2025-11-04 23:47:33.374 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdw45" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdw45-eth0" Nov 4 23:47:33.673150 containerd[1601]: 2025-11-04 23:47:33.381 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdw45" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdw45-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vdw45-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c4a097ba-de0d-4e75-973d-ce0fa1163477", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822", Pod:"coredns-674b8bbfcf-vdw45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b6a34a1550", MAC:"7a:b0:4d:0a:39:bc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:33.673150 containerd[1601]: 2025-11-04 23:47:33.657 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdw45" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdw45-eth0" Nov 4 23:47:33.706111 systemd-networkd[1490]: calif09bc8d9ace: Gained IPv6LL Nov 4 23:47:33.799043 containerd[1601]: time="2025-11-04T23:47:33.798983099Z" level=info msg="connecting to shim c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822" address="unix:///run/containerd/s/a4d37fdaf9236a856ab02d4c78d5dfba256f0949b6d8e05f3ec698d64ac0fb1a" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:47:33.802186 systemd-networkd[1490]: cali7410ea649e2: Link UP Nov 4 23:47:33.803880 systemd-networkd[1490]: cali7410ea649e2: Gained carrier Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.232 [INFO][4523] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--dt8tc-eth0 goldmane-666569f655- calico-system f1677806-4e2b-4950-9276-2412224d7bb8 864 0 2025-11-04 23:47:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-dt8tc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7410ea649e2 [] [] }} ContainerID="8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" Namespace="calico-system" Pod="goldmane-666569f655-dt8tc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dt8tc-" Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.232 [INFO][4523] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" Namespace="calico-system" Pod="goldmane-666569f655-dt8tc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dt8tc-eth0" Nov 4 23:47:33.828248 
containerd[1601]: 2025-11-04 23:47:33.280 [INFO][4552] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" HandleID="k8s-pod-network.8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" Workload="localhost-k8s-goldmane--666569f655--dt8tc-eth0" Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.280 [INFO][4552] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" HandleID="k8s-pod-network.8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" Workload="localhost-k8s-goldmane--666569f655--dt8tc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-dt8tc", "timestamp":"2025-11-04 23:47:33.280297109 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.280 [INFO][4552] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.348 [INFO][4552] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.348 [INFO][4552] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.628 [INFO][4552] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" host="localhost" Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.739 [INFO][4552] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.752 [INFO][4552] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.755 [INFO][4552] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.759 [INFO][4552] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.759 [INFO][4552] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" host="localhost" Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.760 [INFO][4552] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78 Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.781 [INFO][4552] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" host="localhost" Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.790 [INFO][4552] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" host="localhost" Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.791 [INFO][4552] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" host="localhost" Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.791 [INFO][4552] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:47:33.828248 containerd[1601]: 2025-11-04 23:47:33.791 [INFO][4552] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" HandleID="k8s-pod-network.8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" Workload="localhost-k8s-goldmane--666569f655--dt8tc-eth0" Nov 4 23:47:33.828887 containerd[1601]: 2025-11-04 23:47:33.798 [INFO][4523] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" Namespace="calico-system" Pod="goldmane-666569f655-dt8tc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dt8tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--dt8tc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f1677806-4e2b-4950-9276-2412224d7bb8", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 47, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-dt8tc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7410ea649e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:33.828887 containerd[1601]: 2025-11-04 23:47:33.798 [INFO][4523] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" Namespace="calico-system" Pod="goldmane-666569f655-dt8tc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dt8tc-eth0" Nov 4 23:47:33.828887 containerd[1601]: 2025-11-04 23:47:33.798 [INFO][4523] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7410ea649e2 ContainerID="8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" Namespace="calico-system" Pod="goldmane-666569f655-dt8tc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dt8tc-eth0" Nov 4 23:47:33.828887 containerd[1601]: 2025-11-04 23:47:33.805 [INFO][4523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" Namespace="calico-system" Pod="goldmane-666569f655-dt8tc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dt8tc-eth0" Nov 4 23:47:33.828887 containerd[1601]: 2025-11-04 23:47:33.805 [INFO][4523] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" Namespace="calico-system" Pod="goldmane-666569f655-dt8tc" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dt8tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--dt8tc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f1677806-4e2b-4950-9276-2412224d7bb8", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 47, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78", Pod:"goldmane-666569f655-dt8tc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7410ea649e2", MAC:"72:6b:16:11:38:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:33.828887 containerd[1601]: 2025-11-04 23:47:33.820 [INFO][4523] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" Namespace="calico-system" Pod="goldmane-666569f655-dt8tc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dt8tc-eth0" Nov 4 23:47:33.846221 systemd[1]: Started 
cri-containerd-c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822.scope - libcontainer container c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822. Nov 4 23:47:33.858807 containerd[1601]: time="2025-11-04T23:47:33.858747556Z" level=info msg="connecting to shim 8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78" address="unix:///run/containerd/s/913209bdb5c993f73410e2150e9b10196677d0ab8a024b59ef322f15ec1c5a34" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:47:33.869299 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:47:33.885406 systemd[1]: Started cri-containerd-8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78.scope - libcontainer container 8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78. Nov 4 23:47:33.908820 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:47:33.933021 systemd-networkd[1490]: cali281a538f938: Link UP Nov 4 23:47:33.935480 systemd-networkd[1490]: cali281a538f938: Gained carrier Nov 4 23:47:33.972932 containerd[1601]: time="2025-11-04T23:47:33.972857065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vdw45,Uid:c4a097ba-de0d-4e75-973d-ce0fa1163477,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822\"" Nov 4 23:47:33.973815 kubelet[2821]: E1104 23:47:33.973762 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:33.980148 containerd[1601]: time="2025-11-04T23:47:33.980099194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dt8tc,Uid:f1677806-4e2b-4950-9276-2412224d7bb8,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"8e2c2ac44655aeb0cf5022ec797c9cf24da45a47dd15504b14d9095b918baa78\"" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.310 [INFO][4500] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0 calico-apiserver-5cb8dcb848- calico-apiserver 7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff 867 0 2025-11-04 23:46:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cb8dcb848 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5cb8dcb848-zz8td eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali281a538f938 [] [] }} ContainerID="67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8dcb848-zz8td" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.311 [INFO][4500] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8dcb848-zz8td" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.389 [INFO][4594] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" HandleID="k8s-pod-network.67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" Workload="localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.391 [INFO][4594] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" HandleID="k8s-pod-network.67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" Workload="localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000503c50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5cb8dcb848-zz8td", "timestamp":"2025-11-04 23:47:33.389424078 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.391 [INFO][4594] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.791 [INFO][4594] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.791 [INFO][4594] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.802 [INFO][4594] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" host="localhost" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.843 [INFO][4594] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.857 [INFO][4594] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.864 [INFO][4594] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.877 [INFO][4594] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.877 [INFO][4594] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" host="localhost" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.880 [INFO][4594] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474 Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.887 [INFO][4594] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" host="localhost" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.907 [INFO][4594] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" host="localhost" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.907 [INFO][4594] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" host="localhost" Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.907 [INFO][4594] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:47:33.985286 containerd[1601]: 2025-11-04 23:47:33.907 [INFO][4594] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" HandleID="k8s-pod-network.67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" Workload="localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0" Nov 4 23:47:33.986043 containerd[1601]: 2025-11-04 23:47:33.920 [INFO][4500] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8dcb848-zz8td" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0", GenerateName:"calico-apiserver-5cb8dcb848-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 46, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cb8dcb848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5cb8dcb848-zz8td", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali281a538f938", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:33.986043 containerd[1601]: 2025-11-04 23:47:33.921 [INFO][4500] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8dcb848-zz8td" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0" Nov 4 23:47:33.986043 containerd[1601]: 2025-11-04 23:47:33.921 [INFO][4500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali281a538f938 ContainerID="67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8dcb848-zz8td" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0" Nov 4 23:47:33.986043 containerd[1601]: 2025-11-04 23:47:33.939 [INFO][4500] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8dcb848-zz8td" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0" Nov 4 23:47:33.986043 containerd[1601]: 2025-11-04 23:47:33.943 [INFO][4500] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8dcb848-zz8td" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0", 
GenerateName:"calico-apiserver-5cb8dcb848-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 46, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cb8dcb848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474", Pod:"calico-apiserver-5cb8dcb848-zz8td", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali281a538f938", MAC:"f2:98:77:d6:a5:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:33.986043 containerd[1601]: 2025-11-04 23:47:33.977 [INFO][4500] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" Namespace="calico-apiserver" Pod="calico-apiserver-5cb8dcb848-zz8td" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cb8dcb848--zz8td-eth0" Nov 4 23:47:33.986368 containerd[1601]: time="2025-11-04T23:47:33.986311810Z" level=info msg="CreateContainer within sandbox \"c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 
23:47:34.023684 containerd[1601]: time="2025-11-04T23:47:34.023229536Z" level=info msg="Container 9b128f96ad002d6f8950eca7b70f51f6b73996cd8dc20f459ff9c65c66165ed2: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:47:34.032236 containerd[1601]: time="2025-11-04T23:47:34.032189065Z" level=info msg="connecting to shim 67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474" address="unix:///run/containerd/s/1084c64008ae5dbc49151972a85aae3a83aaf595f4b0dc51cd74495e0b410929" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:47:34.035395 containerd[1601]: time="2025-11-04T23:47:34.035273833Z" level=info msg="CreateContainer within sandbox \"c9f622b2344b332d0b9757a77992c272f72d5f5bf14a619233e45c8a26f76822\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b128f96ad002d6f8950eca7b70f51f6b73996cd8dc20f459ff9c65c66165ed2\"" Nov 4 23:47:34.036511 containerd[1601]: time="2025-11-04T23:47:34.036489506Z" level=info msg="StartContainer for \"9b128f96ad002d6f8950eca7b70f51f6b73996cd8dc20f459ff9c65c66165ed2\"" Nov 4 23:47:34.038490 containerd[1601]: time="2025-11-04T23:47:34.038467353Z" level=info msg="connecting to shim 9b128f96ad002d6f8950eca7b70f51f6b73996cd8dc20f459ff9c65c66165ed2" address="unix:///run/containerd/s/a4d37fdaf9236a856ab02d4c78d5dfba256f0949b6d8e05f3ec698d64ac0fb1a" protocol=ttrpc version=3 Nov 4 23:47:34.045388 systemd-networkd[1490]: cali59e2a816007: Link UP Nov 4 23:47:34.047335 systemd-networkd[1490]: cali59e2a816007: Gained carrier Nov 4 23:47:34.051784 containerd[1601]: time="2025-11-04T23:47:34.051723285Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:34.063212 containerd[1601]: time="2025-11-04T23:47:34.063136622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:47:34.063317 containerd[1601]: time="2025-11-04T23:47:34.063136913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:47:34.066901 kubelet[2821]: E1104 23:47:34.065888 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:47:34.066901 kubelet[2821]: E1104 23:47:34.065963 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:47:34.066901 kubelet[2821]: E1104 23:47:34.066158 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fd88t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qfh2g_calico-system(b6136b3a-c7e7-4b68-a7f8-18b611db9e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:34.067159 containerd[1601]: time="2025-11-04T23:47:34.066723418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:47:34.072713 systemd[1]: Started cri-containerd-9b128f96ad002d6f8950eca7b70f51f6b73996cd8dc20f459ff9c65c66165ed2.scope - libcontainer container 9b128f96ad002d6f8950eca7b70f51f6b73996cd8dc20f459ff9c65c66165ed2. Nov 4 23:47:34.078525 systemd[1]: Started cri-containerd-67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474.scope - libcontainer container 67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474. Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:33.329 [INFO][4495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0 calico-apiserver-6d8f987886- calico-apiserver f48ec58e-62fc-4c79-8936-16e0e5b98045 870 0 2025-11-04 23:46:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d8f987886 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d8f987886-b4d7r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali59e2a816007 [] [] }} ContainerID="d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-b4d7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--b4d7r-" Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:33.330 [INFO][4495] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-b4d7r" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0" Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:33.398 [INFO][4601] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" HandleID="k8s-pod-network.d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" Workload="localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0" Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:33.398 [INFO][4601] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" HandleID="k8s-pod-network.d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" Workload="localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035efd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d8f987886-b4d7r", "timestamp":"2025-11-04 23:47:33.398426011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:33.398 [INFO][4601] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:33.909 [INFO][4601] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:33.909 [INFO][4601] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:33.977 [INFO][4601] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" host="localhost" Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:33.991 [INFO][4601] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:33.998 [INFO][4601] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:34.000 [INFO][4601] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:34.007 [INFO][4601] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:34.007 [INFO][4601] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" host="localhost" Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:34.011 [INFO][4601] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51 Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:34.021 [INFO][4601] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" host="localhost" Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:34.031 [INFO][4601] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" host="localhost" Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:34.031 [INFO][4601] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" host="localhost" Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:34.031 [INFO][4601] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:47:34.088014 containerd[1601]: 2025-11-04 23:47:34.031 [INFO][4601] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" HandleID="k8s-pod-network.d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" Workload="localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0" Nov 4 23:47:34.089154 containerd[1601]: 2025-11-04 23:47:34.042 [INFO][4495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-b4d7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0", GenerateName:"calico-apiserver-6d8f987886-", Namespace:"calico-apiserver", SelfLink:"", UID:"f48ec58e-62fc-4c79-8936-16e0e5b98045", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 46, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8f987886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d8f987886-b4d7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59e2a816007", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:34.089154 containerd[1601]: 2025-11-04 23:47:34.042 [INFO][4495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-b4d7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0" Nov 4 23:47:34.089154 containerd[1601]: 2025-11-04 23:47:34.042 [INFO][4495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59e2a816007 ContainerID="d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-b4d7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0" Nov 4 23:47:34.089154 containerd[1601]: 2025-11-04 23:47:34.047 [INFO][4495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-b4d7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0" Nov 4 23:47:34.089154 containerd[1601]: 2025-11-04 23:47:34.071 [INFO][4495] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-b4d7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0", GenerateName:"calico-apiserver-6d8f987886-", Namespace:"calico-apiserver", SelfLink:"", UID:"f48ec58e-62fc-4c79-8936-16e0e5b98045", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 46, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8f987886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51", Pod:"calico-apiserver-6d8f987886-b4d7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59e2a816007", MAC:"92:7c:bf:ce:f7:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:34.089154 containerd[1601]: 2025-11-04 23:47:34.084 [INFO][4495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-b4d7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--b4d7r-eth0" Nov 4 23:47:34.111967 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:47:34.130483 containerd[1601]: time="2025-11-04T23:47:34.130381006Z" level=info msg="connecting to shim d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51" address="unix:///run/containerd/s/ba401b866596a019350cffc892c6cf3608dbfdc3f4f559c8b0fd43045b2c2cb2" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:47:34.140605 containerd[1601]: time="2025-11-04T23:47:34.139402321Z" level=info msg="StartContainer for \"9b128f96ad002d6f8950eca7b70f51f6b73996cd8dc20f459ff9c65c66165ed2\" returns successfully" Nov 4 23:47:34.154338 systemd-networkd[1490]: cali9fb31984e08: Gained IPv6LL Nov 4 23:47:34.180163 systemd[1]: Started cri-containerd-d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51.scope - libcontainer container d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51. 
Nov 4 23:47:34.197783 containerd[1601]: time="2025-11-04T23:47:34.197685802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cb8dcb848-zz8td,Uid:7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"67564bbea627384e6cf473da50d366b1272ee147a36d484dfdac9c02d0a52474\"" Nov 4 23:47:34.204682 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:47:34.243971 containerd[1601]: time="2025-11-04T23:47:34.243886775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8f987886-b4d7r,Uid:f48ec58e-62fc-4c79-8936-16e0e5b98045,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d86535126d53a3e9ec16696b50420f8ec177a3030c8435ef0057efcfdf7d7b51\"" Nov 4 23:47:34.333041 kubelet[2821]: E1104 23:47:34.332994 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:34.333778 kubelet[2821]: E1104 23:47:34.333258 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfb77c96-hmxm7" podUID="c35617ca-16ee-4ea4-b266-2395bc382e38" Nov 4 23:47:34.418683 containerd[1601]: time="2025-11-04T23:47:34.418445022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:34.419739 containerd[1601]: time="2025-11-04T23:47:34.419675673Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:47:34.419825 containerd[1601]: time="2025-11-04T23:47:34.419799734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:47:34.420086 kubelet[2821]: E1104 23:47:34.420028 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:47:34.420167 kubelet[2821]: E1104 23:47:34.420104 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:47:34.420420 kubelet[2821]: E1104 23:47:34.420370 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn6pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-944495598-7w457_calico-system(0be97d2f-7ecc-4809-a2c9-d583b48d2a01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:34.420537 containerd[1601]: time="2025-11-04T23:47:34.420484916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:47:34.422152 kubelet[2821]: E1104 23:47:34.422109 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-944495598-7w457" podUID="0be97d2f-7ecc-4809-a2c9-d583b48d2a01" Nov 4 23:47:34.474146 systemd-networkd[1490]: cali98577adf519: Gained IPv6LL Nov 4 23:47:34.730205 systemd-networkd[1490]: cali4b6a34a1550: Gained IPv6LL Nov 4 23:47:34.781764 containerd[1601]: time="2025-11-04T23:47:34.781688897Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:34.783069 containerd[1601]: time="2025-11-04T23:47:34.783031326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:47:34.783141 containerd[1601]: 
time="2025-11-04T23:47:34.783120624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:47:34.783376 kubelet[2821]: E1104 23:47:34.783315 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:47:34.783456 kubelet[2821]: E1104 23:47:34.783387 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:47:34.783763 containerd[1601]: time="2025-11-04T23:47:34.783731666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:47:34.783865 kubelet[2821]: E1104 23:47:34.783704 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsgzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dt8tc_calico-system(f1677806-4e2b-4950-9276-2412224d7bb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:34.785860 kubelet[2821]: E1104 23:47:34.784983 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dt8tc" podUID="f1677806-4e2b-4950-9276-2412224d7bb8" Nov 4 23:47:35.050204 systemd-networkd[1490]: cali281a538f938: Gained IPv6LL Nov 4 23:47:35.163789 kubelet[2821]: E1104 23:47:35.163729 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:35.164529 containerd[1601]: time="2025-11-04T23:47:35.164229503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8f987886-89zn9,Uid:22717a25-e0bd-4f8f-934c-4d9e328d23a6,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:47:35.164529 containerd[1601]: time="2025-11-04T23:47:35.164273786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jfsqb,Uid:996cd107-7b4a-4765-be02-ba532f9cecae,Namespace:kube-system,Attempt:0,}" Nov 4 23:47:35.189493 containerd[1601]: time="2025-11-04T23:47:35.189420487Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:35.195483 containerd[1601]: time="2025-11-04T23:47:35.195391827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:47:35.195650 containerd[1601]: time="2025-11-04T23:47:35.195517553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:47:35.195786 kubelet[2821]: E1104 23:47:35.195729 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:47:35.195866 kubelet[2821]: E1104 23:47:35.195799 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:47:35.196701 kubelet[2821]: E1104 23:47:35.196199 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fd88t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault
,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qfh2g_calico-system(b6136b3a-c7e7-4b68-a7f8-18b611db9e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:35.197255 containerd[1601]: time="2025-11-04T23:47:35.197213668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:47:35.198874 kubelet[2821]: E1104 23:47:35.198593 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11" Nov 4 23:47:35.306177 systemd-networkd[1490]: cali7410ea649e2: Gained IPv6LL Nov 4 23:47:35.335577 kubelet[2821]: E1104 23:47:35.335247 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Nov 4 23:47:35.336678 kubelet[2821]: E1104 23:47:35.336532 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dt8tc" podUID="f1677806-4e2b-4950-9276-2412224d7bb8" Nov 4 23:47:35.338987 kubelet[2821]: E1104 23:47:35.338884 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-944495598-7w457" podUID="0be97d2f-7ecc-4809-a2c9-d583b48d2a01" Nov 4 23:47:35.338987 kubelet[2821]: E1104 23:47:35.338878 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11" Nov 4 23:47:35.344511 systemd-networkd[1490]: caliae4c733803f: Link UP Nov 4 23:47:35.346083 systemd-networkd[1490]: caliae4c733803f: Gained carrier Nov 4 23:47:35.355892 kubelet[2821]: I1104 23:47:35.355756 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vdw45" podStartSLOduration=45.355732913 podStartE2EDuration="45.355732913s" podCreationTimestamp="2025-11-04 23:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:47:34.360108131 +0000 UTC m=+51.307363400" watchObservedRunningTime="2025-11-04 23:47:35.355732913 +0000 UTC m=+52.302988172" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.240 [INFO][4880] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0 calico-apiserver-6d8f987886- calico-apiserver 22717a25-e0bd-4f8f-934c-4d9e328d23a6 862 0 2025-11-04 23:46:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d8f987886 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d8f987886-89zn9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae4c733803f [] [] }} ContainerID="c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-89zn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--89zn9-" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.240 [INFO][4880] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-89zn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.280 [INFO][4905] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" HandleID="k8s-pod-network.c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" Workload="localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.281 [INFO][4905] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" HandleID="k8s-pod-network.c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" Workload="localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b16c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d8f987886-89zn9", "timestamp":"2025-11-04 23:47:35.280983841 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.281 [INFO][4905] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.281 [INFO][4905] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.281 [INFO][4905] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.291 [INFO][4905] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" host="localhost" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.300 [INFO][4905] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.308 [INFO][4905] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.312 [INFO][4905] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.314 [INFO][4905] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.315 [INFO][4905] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" host="localhost" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.317 [INFO][4905] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.321 [INFO][4905] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" host="localhost" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.330 [INFO][4905] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" host="localhost" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.330 [INFO][4905] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" host="localhost" Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.330 [INFO][4905] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:47:35.388127 containerd[1601]: 2025-11-04 23:47:35.330 [INFO][4905] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" HandleID="k8s-pod-network.c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" Workload="localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0" Nov 4 23:47:35.389986 containerd[1601]: 2025-11-04 23:47:35.338 [INFO][4880] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-89zn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0", GenerateName:"calico-apiserver-6d8f987886-", Namespace:"calico-apiserver", SelfLink:"", UID:"22717a25-e0bd-4f8f-934c-4d9e328d23a6", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 46, 59, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8f987886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d8f987886-89zn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae4c733803f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:35.389986 containerd[1601]: 2025-11-04 23:47:35.338 [INFO][4880] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-89zn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0" Nov 4 23:47:35.389986 containerd[1601]: 2025-11-04 23:47:35.338 [INFO][4880] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae4c733803f ContainerID="c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-89zn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0" Nov 4 23:47:35.389986 containerd[1601]: 2025-11-04 23:47:35.349 [INFO][4880] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-89zn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0" Nov 4 23:47:35.389986 containerd[1601]: 2025-11-04 23:47:35.350 [INFO][4880] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-89zn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0", GenerateName:"calico-apiserver-6d8f987886-", Namespace:"calico-apiserver", SelfLink:"", UID:"22717a25-e0bd-4f8f-934c-4d9e328d23a6", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 46, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8f987886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b", Pod:"calico-apiserver-6d8f987886-89zn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae4c733803f", MAC:"06:34:e3:f1:23:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:35.389986 containerd[1601]: 2025-11-04 23:47:35.379 [INFO][4880] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" Namespace="calico-apiserver" Pod="calico-apiserver-6d8f987886-89zn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d8f987886--89zn9-eth0" Nov 4 23:47:35.466431 containerd[1601]: time="2025-11-04T23:47:35.466312922Z" level=info msg="connecting to shim c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b" address="unix:///run/containerd/s/56002ac75fd38b234bb009b26a2e768d102da7608d5fdb792cee48cb362084e2" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:47:35.528631 systemd[1]: Started cri-containerd-c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b.scope - libcontainer container c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b. 
Nov 4 23:47:35.570678 containerd[1601]: time="2025-11-04T23:47:35.570443694Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:35.578820 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:47:35.683592 containerd[1601]: time="2025-11-04T23:47:35.683509577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:47:35.683769 containerd[1601]: time="2025-11-04T23:47:35.683626767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:47:35.683917 kubelet[2821]: E1104 23:47:35.683849 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:47:35.684000 kubelet[2821]: E1104 23:47:35.683928 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:47:35.684295 kubelet[2821]: E1104 23:47:35.684226 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jl2hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cb8dcb848-zz8td_calico-apiserver(7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:35.684419 containerd[1601]: time="2025-11-04T23:47:35.684323572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:47:35.685714 kubelet[2821]: E1104 23:47:35.685676 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cb8dcb848-zz8td" podUID="7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff" Nov 4 23:47:35.719292 containerd[1601]: time="2025-11-04T23:47:35.719235096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8f987886-89zn9,Uid:22717a25-e0bd-4f8f-934c-4d9e328d23a6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c872bd743bc3bb3680f5241f559bb08d61b35483ce27d219a41eb2562118e91b\"" Nov 4 23:47:35.882180 systemd-networkd[1490]: cali59e2a816007: Gained IPv6LL Nov 4 23:47:36.066117 containerd[1601]: time="2025-11-04T23:47:36.066019806Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:36.111522 containerd[1601]: time="2025-11-04T23:47:36.111434488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:47:36.111750 containerd[1601]: time="2025-11-04T23:47:36.111526340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:47:36.111987 kubelet[2821]: E1104 23:47:36.111931 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:47:36.112090 kubelet[2821]: E1104 23:47:36.112005 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:47:36.112451 kubelet[2821]: E1104 23:47:36.112382 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62pfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8f987886-b4d7r_calico-apiserver(f48ec58e-62fc-4c79-8936-16e0e5b98045): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:36.112654 containerd[1601]: time="2025-11-04T23:47:36.112425727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:47:36.113986 kubelet[2821]: E1104 23:47:36.113896 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8f987886-b4d7r" podUID="f48ec58e-62fc-4c79-8936-16e0e5b98045" Nov 4 23:47:36.133152 systemd-networkd[1490]: calic1d75aec1dc: Link UP Nov 4 23:47:36.134314 systemd-networkd[1490]: calic1d75aec1dc: Gained carrier Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.238 [INFO][4874] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0 coredns-674b8bbfcf- kube-system 996cd107-7b4a-4765-be02-ba532f9cecae 873 0 2025-11-04 23:46:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-jfsqb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic1d75aec1dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfsqb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jfsqb-" Nov 4 23:47:36.151596 
containerd[1601]: 2025-11-04 23:47:35.238 [INFO][4874] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfsqb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0" Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.287 [INFO][4903] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" HandleID="k8s-pod-network.fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" Workload="localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0" Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.287 [INFO][4903] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" HandleID="k8s-pod-network.fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" Workload="localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bf3d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-jfsqb", "timestamp":"2025-11-04 23:47:35.287493769 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.288 [INFO][4903] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.331 [INFO][4903] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.331 [INFO][4903] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.399 [INFO][4903] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" host="localhost" Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.434 [INFO][4903] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.765 [INFO][4903] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.805 [INFO][4903] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.808 [INFO][4903] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.808 [INFO][4903] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" host="localhost" Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:35.823 [INFO][4903] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3 Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:36.096 [INFO][4903] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" host="localhost" Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:36.124 [INFO][4903] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" host="localhost" Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:36.124 [INFO][4903] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" host="localhost" Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:36.125 [INFO][4903] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:47:36.151596 containerd[1601]: 2025-11-04 23:47:36.125 [INFO][4903] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" HandleID="k8s-pod-network.fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" Workload="localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0" Nov 4 23:47:36.152494 containerd[1601]: 2025-11-04 23:47:36.129 [INFO][4874] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfsqb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"996cd107-7b4a-4765-be02-ba532f9cecae", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-jfsqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1d75aec1dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:36.152494 containerd[1601]: 2025-11-04 23:47:36.129 [INFO][4874] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfsqb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0" Nov 4 23:47:36.152494 containerd[1601]: 2025-11-04 23:47:36.129 [INFO][4874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1d75aec1dc ContainerID="fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfsqb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0" Nov 4 23:47:36.152494 containerd[1601]: 2025-11-04 23:47:36.134 [INFO][4874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfsqb" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0" Nov 4 23:47:36.152494 containerd[1601]: 2025-11-04 23:47:36.135 [INFO][4874] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfsqb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"996cd107-7b4a-4765-be02-ba532f9cecae", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3", Pod:"coredns-674b8bbfcf-jfsqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1d75aec1dc", MAC:"d2:f3:ee:50:35:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:47:36.152494 containerd[1601]: 2025-11-04 23:47:36.147 [INFO][4874] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-jfsqb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jfsqb-eth0" Nov 4 23:47:36.178747 containerd[1601]: time="2025-11-04T23:47:36.177968539Z" level=info msg="connecting to shim fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3" address="unix:///run/containerd/s/7b7bbd8b81df427239ba63080ef871c231231fbf11686b0cbdabdd60c2b8415c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:47:36.219104 systemd[1]: Started cri-containerd-fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3.scope - libcontainer container fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3. 
Nov 4 23:47:36.237735 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:47:36.300320 containerd[1601]: time="2025-11-04T23:47:36.300267039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jfsqb,Uid:996cd107-7b4a-4765-be02-ba532f9cecae,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3\"" Nov 4 23:47:36.301364 kubelet[2821]: E1104 23:47:36.301296 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:36.310121 containerd[1601]: time="2025-11-04T23:47:36.310065872Z" level=info msg="CreateContainer within sandbox \"fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:47:36.324972 containerd[1601]: time="2025-11-04T23:47:36.324206636Z" level=info msg="Container c1a6f2e7d209af139431bd14a6fcb8421b1919e7c9003c4d4717540721a4056b: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:47:36.329611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3595995939.mount: Deactivated successfully. 
Nov 4 23:47:36.332544 containerd[1601]: time="2025-11-04T23:47:36.332508271Z" level=info msg="CreateContainer within sandbox \"fc0f00d9ed32edf89425e2b19ae6bae132c6e86a3f46ea281200806705ba14d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c1a6f2e7d209af139431bd14a6fcb8421b1919e7c9003c4d4717540721a4056b\"" Nov 4 23:47:36.333189 containerd[1601]: time="2025-11-04T23:47:36.333154683Z" level=info msg="StartContainer for \"c1a6f2e7d209af139431bd14a6fcb8421b1919e7c9003c4d4717540721a4056b\"" Nov 4 23:47:36.334204 containerd[1601]: time="2025-11-04T23:47:36.334177671Z" level=info msg="connecting to shim c1a6f2e7d209af139431bd14a6fcb8421b1919e7c9003c4d4717540721a4056b" address="unix:///run/containerd/s/7b7bbd8b81df427239ba63080ef871c231231fbf11686b0cbdabdd60c2b8415c" protocol=ttrpc version=3 Nov 4 23:47:36.342818 kubelet[2821]: E1104 23:47:36.342780 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:36.343368 kubelet[2821]: E1104 23:47:36.343337 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cb8dcb848-zz8td" podUID="7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff" Nov 4 23:47:36.343487 kubelet[2821]: E1104 23:47:36.343337 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8f987886-b4d7r" podUID="f48ec58e-62fc-4c79-8936-16e0e5b98045" Nov 4 23:47:36.364271 systemd[1]: Started cri-containerd-c1a6f2e7d209af139431bd14a6fcb8421b1919e7c9003c4d4717540721a4056b.scope - libcontainer container c1a6f2e7d209af139431bd14a6fcb8421b1919e7c9003c4d4717540721a4056b. Nov 4 23:47:36.413499 containerd[1601]: time="2025-11-04T23:47:36.413135026Z" level=info msg="StartContainer for \"c1a6f2e7d209af139431bd14a6fcb8421b1919e7c9003c4d4717540721a4056b\" returns successfully" Nov 4 23:47:36.477167 containerd[1601]: time="2025-11-04T23:47:36.477077008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:36.478468 containerd[1601]: time="2025-11-04T23:47:36.478383277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:47:36.478539 containerd[1601]: time="2025-11-04T23:47:36.478419325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:47:36.478790 kubelet[2821]: E1104 23:47:36.478710 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:47:36.478861 kubelet[2821]: E1104 23:47:36.478788 2821 kuberuntime_image.go:42] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:47:36.479333 kubelet[2821]: E1104 23:47:36.479277 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl7gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8f987886-89zn9_calico-apiserver(22717a25-e0bd-4f8f-934c-4d9e328d23a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:36.480602 kubelet[2821]: E1104 23:47:36.480559 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8f987886-89zn9" podUID="22717a25-e0bd-4f8f-934c-4d9e328d23a6" Nov 4 23:47:36.656024 systemd[1]: Started sshd@9-10.0.0.25:22-10.0.0.1:48294.service - OpenSSH per-connection server daemon (10.0.0.1:48294). 
Nov 4 23:47:36.752475 sshd[5072]: Accepted publickey for core from 10.0.0.1 port 48294 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:47:36.755541 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:47:36.763722 systemd-logind[1579]: New session 10 of user core. Nov 4 23:47:36.769167 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 23:47:36.929686 sshd[5075]: Connection closed by 10.0.0.1 port 48294 Nov 4 23:47:36.930002 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Nov 4 23:47:36.935246 systemd[1]: sshd@9-10.0.0.25:22-10.0.0.1:48294.service: Deactivated successfully. Nov 4 23:47:36.937973 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 23:47:36.938858 systemd-logind[1579]: Session 10 logged out. Waiting for processes to exit. Nov 4 23:47:36.940792 systemd-logind[1579]: Removed session 10. Nov 4 23:47:37.291232 systemd-networkd[1490]: caliae4c733803f: Gained IPv6LL Nov 4 23:47:37.366046 kubelet[2821]: E1104 23:47:37.365980 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:37.367233 kubelet[2821]: E1104 23:47:37.366364 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:37.367480 kubelet[2821]: E1104 23:47:37.367430 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8f987886-89zn9" podUID="22717a25-e0bd-4f8f-934c-4d9e328d23a6" Nov 4 23:47:37.398503 kubelet[2821]: I1104 23:47:37.398397 2821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jfsqb" podStartSLOduration=47.398375101 podStartE2EDuration="47.398375101s" podCreationTimestamp="2025-11-04 23:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:47:37.380964371 +0000 UTC m=+54.328219630" watchObservedRunningTime="2025-11-04 23:47:37.398375101 +0000 UTC m=+54.345630360" Nov 4 23:47:38.186132 systemd-networkd[1490]: calic1d75aec1dc: Gained IPv6LL Nov 4 23:47:38.368421 kubelet[2821]: E1104 23:47:38.368375 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:39.369687 kubelet[2821]: E1104 23:47:39.369641 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:47:41.943758 systemd[1]: Started sshd@10-10.0.0.25:22-10.0.0.1:48298.service - OpenSSH per-connection server daemon (10.0.0.1:48298). Nov 4 23:47:41.996328 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 48298 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:47:41.998495 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:47:42.003624 systemd-logind[1579]: New session 11 of user core. Nov 4 23:47:42.014180 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 4 23:47:42.142951 sshd[5115]: Connection closed by 10.0.0.1 port 48298 Nov 4 23:47:42.143312 sshd-session[5112]: pam_unix(sshd:session): session closed for user core Nov 4 23:47:42.149293 systemd[1]: sshd@10-10.0.0.25:22-10.0.0.1:48298.service: Deactivated successfully. Nov 4 23:47:42.151699 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 23:47:42.152703 systemd-logind[1579]: Session 11 logged out. Waiting for processes to exit. Nov 4 23:47:42.153895 systemd-logind[1579]: Removed session 11. Nov 4 23:47:46.164988 containerd[1601]: time="2025-11-04T23:47:46.164931485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:47:46.513973 containerd[1601]: time="2025-11-04T23:47:46.513782508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:46.518280 containerd[1601]: time="2025-11-04T23:47:46.518201925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:47:46.518280 containerd[1601]: time="2025-11-04T23:47:46.518246650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:47:46.518542 kubelet[2821]: E1104 23:47:46.518484 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:47:46.518938 kubelet[2821]: E1104 23:47:46.518547 2821 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:47:46.518938 kubelet[2821]: E1104 23:47:46.518718 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frf4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7bfb77c96-hmxm7_calico-system(c35617ca-16ee-4ea4-b266-2395bc382e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:46.520027 kubelet[2821]: E1104 23:47:46.519935 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7bfb77c96-hmxm7" podUID="c35617ca-16ee-4ea4-b266-2395bc382e38" Nov 4 23:47:47.157706 systemd[1]: Started sshd@11-10.0.0.25:22-10.0.0.1:35510.service - OpenSSH per-connection server daemon (10.0.0.1:35510). Nov 4 23:47:47.234788 sshd[5141]: Accepted publickey for core from 10.0.0.1 port 35510 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:47:47.237358 sshd-session[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:47:47.243193 systemd-logind[1579]: New session 12 of user core. Nov 4 23:47:47.254081 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 23:47:47.515750 sshd[5144]: Connection closed by 10.0.0.1 port 35510 Nov 4 23:47:47.515987 sshd-session[5141]: pam_unix(sshd:session): session closed for user core Nov 4 23:47:47.521666 systemd[1]: sshd@11-10.0.0.25:22-10.0.0.1:35510.service: Deactivated successfully. Nov 4 23:47:47.524062 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 23:47:47.524813 systemd-logind[1579]: Session 12 logged out. Waiting for processes to exit. Nov 4 23:47:47.525839 systemd-logind[1579]: Removed session 12. 
Nov 4 23:47:48.165476 containerd[1601]: time="2025-11-04T23:47:48.165405583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:47:48.481332 containerd[1601]: time="2025-11-04T23:47:48.481142427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:48.482426 containerd[1601]: time="2025-11-04T23:47:48.482375001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:47:48.482560 containerd[1601]: time="2025-11-04T23:47:48.482458750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:47:48.482694 kubelet[2821]: E1104 23:47:48.482636 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:47:48.483098 kubelet[2821]: E1104 23:47:48.482703 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:47:48.483098 kubelet[2821]: E1104 23:47:48.482987 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:67da652ae71641aca2aebf2d1372af6d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qn6pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-944495598-7w457_calico-system(0be97d2f-7ecc-4809-a2c9-d583b48d2a01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:48.483205 containerd[1601]: time="2025-11-04T23:47:48.483011612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:47:48.838363 
containerd[1601]: time="2025-11-04T23:47:48.838200655Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:48.871203 containerd[1601]: time="2025-11-04T23:47:48.871087271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:47:48.871203 containerd[1601]: time="2025-11-04T23:47:48.871142586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:47:48.871541 kubelet[2821]: E1104 23:47:48.871459 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:47:48.871541 kubelet[2821]: E1104 23:47:48.871535 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:47:48.871830 kubelet[2821]: E1104 23:47:48.871782 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fd88t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qfh2g_calico-system(b6136b3a-c7e7-4b68-a7f8-18b611db9e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:48.872062 containerd[1601]: time="2025-11-04T23:47:48.871977745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:47:49.266869 containerd[1601]: time="2025-11-04T23:47:49.266794077Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:49.268222 containerd[1601]: time="2025-11-04T23:47:49.268139097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:47:49.268310 containerd[1601]: time="2025-11-04T23:47:49.268184164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:47:49.268522 kubelet[2821]: E1104 23:47:49.268475 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:47:49.268608 kubelet[2821]: E1104 23:47:49.268538 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:47:49.268950 kubelet[2821]: E1104 23:47:49.268866 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jl2hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cb8dcb848-zz8td_calico-apiserver(7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:49.269281 containerd[1601]: time="2025-11-04T23:47:49.268933971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:47:49.270309 kubelet[2821]: E1104 23:47:49.270264 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cb8dcb848-zz8td" podUID="7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff" Nov 4 23:47:49.621565 containerd[1601]: 
time="2025-11-04T23:47:49.621355964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:49.622786 containerd[1601]: time="2025-11-04T23:47:49.622717465Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:47:49.622933 containerd[1601]: time="2025-11-04T23:47:49.622785074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:47:49.623047 kubelet[2821]: E1104 23:47:49.622991 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:47:49.623047 kubelet[2821]: E1104 23:47:49.623050 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:47:49.623661 kubelet[2821]: E1104 23:47:49.623331 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn6pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-944495598-7w457_calico-system(0be97d2f-7ecc-4809-a2c9-d583b48d2a01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:49.623813 containerd[1601]: time="2025-11-04T23:47:49.623359127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:47:49.624987 kubelet[2821]: E1104 23:47:49.624930 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-944495598-7w457" podUID="0be97d2f-7ecc-4809-a2c9-d583b48d2a01" Nov 4 23:47:49.948935 containerd[1601]: time="2025-11-04T23:47:49.948810224Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:50.024541 containerd[1601]: time="2025-11-04T23:47:50.024463399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:47:50.024541 containerd[1601]: time="2025-11-04T23:47:50.024507644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:47:50.024896 kubelet[2821]: E1104 23:47:50.024821 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:47:50.024976 kubelet[2821]: E1104 23:47:50.024898 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:47:50.025268 kubelet[2821]: E1104 23:47:50.025205 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fd88t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qfh2g_calico-system(b6136b3a-c7e7-4b68-a7f8-18b611db9e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:50.025702 containerd[1601]: time="2025-11-04T23:47:50.025665009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:47:50.026451 kubelet[2821]: E1104 23:47:50.026393 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11" Nov 4 23:47:50.373929 containerd[1601]: time="2025-11-04T23:47:50.373732447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:50.478589 containerd[1601]: time="2025-11-04T23:47:50.478508434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:47:50.478784 containerd[1601]: time="2025-11-04T23:47:50.478555243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:47:50.479072 kubelet[2821]: E1104 23:47:50.478999 2821 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:47:50.479072 kubelet[2821]: E1104 23:47:50.479063 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:47:50.479687 kubelet[2821]: E1104 23:47:50.479368 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnl
y:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsgzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dt8tc_calico-system(f1677806-4e2b-4950-9276-2412224d7bb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:50.479819 containerd[1601]: time="2025-11-04T23:47:50.479403540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:47:50.480979 
kubelet[2821]: E1104 23:47:50.480882 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dt8tc" podUID="f1677806-4e2b-4950-9276-2412224d7bb8" Nov 4 23:47:50.830840 containerd[1601]: time="2025-11-04T23:47:50.830754443Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:50.836846 containerd[1601]: time="2025-11-04T23:47:50.836692165Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:47:50.836846 containerd[1601]: time="2025-11-04T23:47:50.836773440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:47:50.837082 kubelet[2821]: E1104 23:47:50.837023 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:47:50.837554 kubelet[2821]: E1104 23:47:50.837088 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:47:50.837554 kubelet[2821]: E1104 23:47:50.837283 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl7gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8f987886-89zn9_calico-apiserver(22717a25-e0bd-4f8f-934c-4d9e328d23a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:50.838578 kubelet[2821]: E1104 23:47:50.838508 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8f987886-89zn9" podUID="22717a25-e0bd-4f8f-934c-4d9e328d23a6" Nov 4 23:47:51.165441 containerd[1601]: time="2025-11-04T23:47:51.165056342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:47:51.536675 containerd[1601]: 
time="2025-11-04T23:47:51.536506591Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:47:51.659398 containerd[1601]: time="2025-11-04T23:47:51.659309689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:47:51.659398 containerd[1601]: time="2025-11-04T23:47:51.659318656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:47:51.659786 kubelet[2821]: E1104 23:47:51.659717 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:47:51.659849 kubelet[2821]: E1104 23:47:51.659788 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:47:51.660097 kubelet[2821]: E1104 23:47:51.660033 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62pfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8f987886-b4d7r_calico-apiserver(f48ec58e-62fc-4c79-8936-16e0e5b98045): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:47:51.661266 kubelet[2821]: E1104 23:47:51.661194 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8f987886-b4d7r" podUID="f48ec58e-62fc-4c79-8936-16e0e5b98045" Nov 4 23:47:52.540695 systemd[1]: Started sshd@12-10.0.0.25:22-10.0.0.1:35522.service - OpenSSH per-connection server daemon (10.0.0.1:35522). Nov 4 23:47:52.594174 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 35522 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:47:52.596011 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:47:52.601027 systemd-logind[1579]: New session 13 of user core. Nov 4 23:47:52.611147 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 23:47:52.727059 sshd[5164]: Connection closed by 10.0.0.1 port 35522 Nov 4 23:47:52.727438 sshd-session[5161]: pam_unix(sshd:session): session closed for user core Nov 4 23:47:52.737269 systemd[1]: sshd@12-10.0.0.25:22-10.0.0.1:35522.service: Deactivated successfully. Nov 4 23:47:52.739618 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 23:47:52.740608 systemd-logind[1579]: Session 13 logged out. Waiting for processes to exit. Nov 4 23:47:52.744712 systemd[1]: Started sshd@13-10.0.0.25:22-10.0.0.1:35532.service - OpenSSH per-connection server daemon (10.0.0.1:35532). 
Nov 4 23:47:52.746007 systemd-logind[1579]: Removed session 13. Nov 4 23:47:52.809137 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 35532 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:47:52.810749 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:47:52.815954 systemd-logind[1579]: New session 14 of user core. Nov 4 23:47:52.823105 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 23:47:52.979146 sshd[5181]: Connection closed by 10.0.0.1 port 35532 Nov 4 23:47:52.982150 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Nov 4 23:47:52.990943 systemd[1]: sshd@13-10.0.0.25:22-10.0.0.1:35532.service: Deactivated successfully. Nov 4 23:47:52.994637 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 23:47:52.997992 systemd-logind[1579]: Session 14 logged out. Waiting for processes to exit. Nov 4 23:47:53.000026 systemd[1]: Started sshd@14-10.0.0.25:22-10.0.0.1:33676.service - OpenSSH per-connection server daemon (10.0.0.1:33676). Nov 4 23:47:53.001325 systemd-logind[1579]: Removed session 14. Nov 4 23:47:53.067436 sshd[5197]: Accepted publickey for core from 10.0.0.1 port 33676 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:47:53.069112 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:47:53.074453 systemd-logind[1579]: New session 15 of user core. Nov 4 23:47:53.086131 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 23:47:53.232576 sshd[5202]: Connection closed by 10.0.0.1 port 33676 Nov 4 23:47:53.232897 sshd-session[5197]: pam_unix(sshd:session): session closed for user core Nov 4 23:47:53.238198 systemd[1]: sshd@14-10.0.0.25:22-10.0.0.1:33676.service: Deactivated successfully. Nov 4 23:47:53.240796 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 23:47:53.241671 systemd-logind[1579]: Session 15 logged out. 
Waiting for processes to exit. Nov 4 23:47:53.242852 systemd-logind[1579]: Removed session 15. Nov 4 23:47:58.247194 systemd[1]: Started sshd@15-10.0.0.25:22-10.0.0.1:33678.service - OpenSSH per-connection server daemon (10.0.0.1:33678). Nov 4 23:47:58.305199 sshd[5218]: Accepted publickey for core from 10.0.0.1 port 33678 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:47:58.307188 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:47:58.313119 systemd-logind[1579]: New session 16 of user core. Nov 4 23:47:58.323107 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 23:47:58.446999 sshd[5221]: Connection closed by 10.0.0.1 port 33678 Nov 4 23:47:58.447413 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Nov 4 23:47:58.453120 systemd[1]: sshd@15-10.0.0.25:22-10.0.0.1:33678.service: Deactivated successfully. Nov 4 23:47:58.455362 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 23:47:58.456250 systemd-logind[1579]: Session 16 logged out. Waiting for processes to exit. Nov 4 23:47:58.458152 systemd-logind[1579]: Removed session 16. 
Nov 4 23:48:00.164359 kubelet[2821]: E1104 23:48:00.164247 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfb77c96-hmxm7" podUID="c35617ca-16ee-4ea4-b266-2395bc382e38" Nov 4 23:48:00.165129 kubelet[2821]: E1104 23:48:00.164699 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-944495598-7w457" podUID="0be97d2f-7ecc-4809-a2c9-d583b48d2a01" Nov 4 23:48:01.164821 kubelet[2821]: E1104 23:48:01.164746 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cb8dcb848-zz8td" podUID="7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff" Nov 4 23:48:03.169219 kubelet[2821]: E1104 23:48:03.169146 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8f987886-b4d7r" podUID="f48ec58e-62fc-4c79-8936-16e0e5b98045" Nov 4 23:48:03.169219 kubelet[2821]: E1104 23:48:03.169147 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11" Nov 4 23:48:03.170520 kubelet[2821]: E1104 23:48:03.170434 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:48:03.390259 containerd[1601]: time="2025-11-04T23:48:03.390174243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db79753ced536ed20ef1b1468ccb41af746f4af7d6381afecb94bccf7bedd41a\" id:\"0c1b82fbf1936a1ee09b7fa6b3f2ce949ac56f60e73d47a3e85f349f9720cffe\" pid:5249 exit_status:1 exited_at:{seconds:1762300083 nanos:389687858}" Nov 4 23:48:03.460459 systemd[1]: Started sshd@16-10.0.0.25:22-10.0.0.1:34542.service - OpenSSH per-connection server daemon (10.0.0.1:34542). Nov 4 23:48:03.535932 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 34542 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:48:03.538000 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:48:03.543262 systemd-logind[1579]: New session 17 of user core. Nov 4 23:48:03.553066 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 23:48:03.686951 sshd[5265]: Connection closed by 10.0.0.1 port 34542 Nov 4 23:48:03.687318 sshd-session[5262]: pam_unix(sshd:session): session closed for user core Nov 4 23:48:03.692022 systemd[1]: sshd@16-10.0.0.25:22-10.0.0.1:34542.service: Deactivated successfully. Nov 4 23:48:03.694284 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 23:48:03.695055 systemd-logind[1579]: Session 17 logged out. Waiting for processes to exit. Nov 4 23:48:03.696089 systemd-logind[1579]: Removed session 17. 
Nov 4 23:48:04.164482 kubelet[2821]: E1104 23:48:04.164439 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:48:06.164605 kubelet[2821]: E1104 23:48:06.164538 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8f987886-89zn9" podUID="22717a25-e0bd-4f8f-934c-4d9e328d23a6" Nov 4 23:48:06.165479 kubelet[2821]: E1104 23:48:06.164770 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dt8tc" podUID="f1677806-4e2b-4950-9276-2412224d7bb8" Nov 4 23:48:08.704454 systemd[1]: Started sshd@17-10.0.0.25:22-10.0.0.1:34550.service - OpenSSH per-connection server daemon (10.0.0.1:34550). Nov 4 23:48:08.822341 sshd[5284]: Accepted publickey for core from 10.0.0.1 port 34550 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:48:08.824886 sshd-session[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:48:08.830957 systemd-logind[1579]: New session 18 of user core. 
Nov 4 23:48:08.842245 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 23:48:08.996325 sshd[5287]: Connection closed by 10.0.0.1 port 34550 Nov 4 23:48:08.996877 sshd-session[5284]: pam_unix(sshd:session): session closed for user core Nov 4 23:48:09.002110 systemd[1]: sshd@17-10.0.0.25:22-10.0.0.1:34550.service: Deactivated successfully. Nov 4 23:48:09.004605 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 23:48:09.005747 systemd-logind[1579]: Session 18 logged out. Waiting for processes to exit. Nov 4 23:48:09.007264 systemd-logind[1579]: Removed session 18. Nov 4 23:48:10.164192 kubelet[2821]: E1104 23:48:10.164118 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:48:13.170235 containerd[1601]: time="2025-11-04T23:48:13.170171984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:48:13.520780 containerd[1601]: time="2025-11-04T23:48:13.520586465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:48:13.521969 containerd[1601]: time="2025-11-04T23:48:13.521915685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:48:13.522035 containerd[1601]: time="2025-11-04T23:48:13.521986953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:48:13.522209 kubelet[2821]: E1104 23:48:13.522157 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:48:13.522672 kubelet[2821]: E1104 23:48:13.522225 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:48:13.522672 kubelet[2821]: E1104 23:48:13.522561 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frf4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/ser
viceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7bfb77c96-hmxm7_calico-system(c35617ca-16ee-4ea4-b266-2395bc382e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:48:13.522862 containerd[1601]: time="2025-11-04T23:48:13.522691715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:48:13.524084 kubelet[2821]: E1104 23:48:13.524048 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfb77c96-hmxm7" podUID="c35617ca-16ee-4ea4-b266-2395bc382e38" Nov 4 23:48:13.861693 containerd[1601]: time="2025-11-04T23:48:13.861541411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:48:13.863152 containerd[1601]: time="2025-11-04T23:48:13.863105916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:48:13.863217 containerd[1601]: time="2025-11-04T23:48:13.863181511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:48:13.863427 kubelet[2821]: E1104 23:48:13.863359 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:48:13.863492 kubelet[2821]: E1104 23:48:13.863427 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:48:13.863650 
kubelet[2821]: E1104 23:48:13.863610 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jl2hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cb8dcb848-zz8td_calico-apiserver(7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:48:13.864810 kubelet[2821]: E1104 23:48:13.864754 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cb8dcb848-zz8td" podUID="7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff" Nov 4 23:48:14.015123 systemd[1]: Started sshd@18-10.0.0.25:22-10.0.0.1:37450.service - OpenSSH per-connection server daemon (10.0.0.1:37450). 
Nov 4 23:48:14.092234 sshd[5306]: Accepted publickey for core from 10.0.0.1 port 37450 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:48:14.094557 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:48:14.100529 systemd-logind[1579]: New session 19 of user core. Nov 4 23:48:14.111105 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 23:48:14.246789 sshd[5309]: Connection closed by 10.0.0.1 port 37450 Nov 4 23:48:14.247378 sshd-session[5306]: pam_unix(sshd:session): session closed for user core Nov 4 23:48:14.256896 systemd[1]: sshd@18-10.0.0.25:22-10.0.0.1:37450.service: Deactivated successfully. Nov 4 23:48:14.259851 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 23:48:14.260944 systemd-logind[1579]: Session 19 logged out. Waiting for processes to exit. Nov 4 23:48:14.264760 systemd[1]: Started sshd@19-10.0.0.25:22-10.0.0.1:37464.service - OpenSSH per-connection server daemon (10.0.0.1:37464). Nov 4 23:48:14.265842 systemd-logind[1579]: Removed session 19. Nov 4 23:48:14.318291 sshd[5323]: Accepted publickey for core from 10.0.0.1 port 37464 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:48:14.320070 sshd-session[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:48:14.325205 systemd-logind[1579]: New session 20 of user core. Nov 4 23:48:14.336057 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 4 23:48:14.659349 sshd[5326]: Connection closed by 10.0.0.1 port 37464 Nov 4 23:48:14.659775 sshd-session[5323]: pam_unix(sshd:session): session closed for user core Nov 4 23:48:14.669619 systemd[1]: sshd@19-10.0.0.25:22-10.0.0.1:37464.service: Deactivated successfully. Nov 4 23:48:14.671633 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 23:48:14.672522 systemd-logind[1579]: Session 20 logged out. Waiting for processes to exit. 
Nov 4 23:48:14.675383 systemd[1]: Started sshd@20-10.0.0.25:22-10.0.0.1:37466.service - OpenSSH per-connection server daemon (10.0.0.1:37466). Nov 4 23:48:14.676288 systemd-logind[1579]: Removed session 20. Nov 4 23:48:14.745915 sshd[5338]: Accepted publickey for core from 10.0.0.1 port 37466 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:48:14.747868 sshd-session[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:48:14.753662 systemd-logind[1579]: New session 21 of user core. Nov 4 23:48:14.760065 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 23:48:15.166194 containerd[1601]: time="2025-11-04T23:48:15.166111403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:48:15.517609 sshd[5341]: Connection closed by 10.0.0.1 port 37466 Nov 4 23:48:15.519225 sshd-session[5338]: pam_unix(sshd:session): session closed for user core Nov 4 23:48:15.522526 containerd[1601]: time="2025-11-04T23:48:15.522487645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:48:15.525759 containerd[1601]: time="2025-11-04T23:48:15.525141489Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:48:15.525759 containerd[1601]: time="2025-11-04T23:48:15.525157640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:48:15.526338 kubelet[2821]: E1104 23:48:15.526255 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:48:15.526338 kubelet[2821]: E1104 23:48:15.526328 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:48:15.527280 kubelet[2821]: E1104 23:48:15.526472 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:67da652ae71641aca2aebf2d1372af6d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qn6pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},S
tartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-944495598-7w457_calico-system(0be97d2f-7ecc-4809-a2c9-d583b48d2a01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:48:15.529303 containerd[1601]: time="2025-11-04T23:48:15.529278032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:48:15.531999 systemd[1]: sshd@20-10.0.0.25:22-10.0.0.1:37466.service: Deactivated successfully. Nov 4 23:48:15.535013 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 23:48:15.538639 systemd-logind[1579]: Session 21 logged out. Waiting for processes to exit. Nov 4 23:48:15.544190 systemd[1]: Started sshd@21-10.0.0.25:22-10.0.0.1:37482.service - OpenSSH per-connection server daemon (10.0.0.1:37482). Nov 4 23:48:15.546983 systemd-logind[1579]: Removed session 21. Nov 4 23:48:15.602514 sshd[5364]: Accepted publickey for core from 10.0.0.1 port 37482 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:48:15.604038 sshd-session[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:48:15.609835 systemd-logind[1579]: New session 22 of user core. Nov 4 23:48:15.618066 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 4 23:48:15.854388 containerd[1601]: time="2025-11-04T23:48:15.854189685Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:48:15.872421 containerd[1601]: time="2025-11-04T23:48:15.872249793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:48:15.873015 containerd[1601]: time="2025-11-04T23:48:15.872899771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:48:15.873347 kubelet[2821]: E1104 23:48:15.873255 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:48:15.873929 kubelet[2821]: E1104 23:48:15.873786 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:48:15.874579 kubelet[2821]: E1104 23:48:15.874517 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn6pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-944495598-7w457_calico-system(0be97d2f-7ecc-4809-a2c9-d583b48d2a01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:48:15.876001 kubelet[2821]: E1104 23:48:15.875947 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-944495598-7w457" podUID="0be97d2f-7ecc-4809-a2c9-d583b48d2a01" Nov 4 23:48:15.889522 sshd[5367]: Connection closed by 10.0.0.1 port 37482 Nov 4 23:48:15.889763 sshd-session[5364]: pam_unix(sshd:session): session closed for user core Nov 4 23:48:15.903483 systemd[1]: sshd@21-10.0.0.25:22-10.0.0.1:37482.service: Deactivated successfully. Nov 4 23:48:15.905677 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 23:48:15.909198 systemd-logind[1579]: Session 22 logged out. Waiting for processes to exit. Nov 4 23:48:15.911807 systemd[1]: Started sshd@22-10.0.0.25:22-10.0.0.1:37498.service - OpenSSH per-connection server daemon (10.0.0.1:37498). Nov 4 23:48:15.913477 systemd-logind[1579]: Removed session 22. 
Nov 4 23:48:15.970361 sshd[5379]: Accepted publickey for core from 10.0.0.1 port 37498 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI Nov 4 23:48:15.972617 sshd-session[5379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:48:15.979003 systemd-logind[1579]: New session 23 of user core. Nov 4 23:48:15.992171 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 4 23:48:16.116677 sshd[5382]: Connection closed by 10.0.0.1 port 37498 Nov 4 23:48:16.116940 sshd-session[5379]: pam_unix(sshd:session): session closed for user core Nov 4 23:48:16.122321 systemd[1]: sshd@22-10.0.0.25:22-10.0.0.1:37498.service: Deactivated successfully. Nov 4 23:48:16.126160 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 23:48:16.127343 systemd-logind[1579]: Session 23 logged out. Waiting for processes to exit. Nov 4 23:48:16.129852 systemd-logind[1579]: Removed session 23. Nov 4 23:48:16.165570 containerd[1601]: time="2025-11-04T23:48:16.165504298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:48:16.548978 containerd[1601]: time="2025-11-04T23:48:16.548834386Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:48:16.550281 containerd[1601]: time="2025-11-04T23:48:16.550237251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:48:16.550405 containerd[1601]: time="2025-11-04T23:48:16.550333747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:48:16.550585 kubelet[2821]: E1104 23:48:16.550502 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:48:16.550585 kubelet[2821]: E1104 23:48:16.550575 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:48:16.550965 kubelet[2821]: E1104 23:48:16.550749 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fd88t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Cap
abilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qfh2g_calico-system(b6136b3a-c7e7-4b68-a7f8-18b611db9e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:48:16.552928 containerd[1601]: time="2025-11-04T23:48:16.552880926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:48:16.927843 containerd[1601]: time="2025-11-04T23:48:16.927762713Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:48:16.929455 containerd[1601]: time="2025-11-04T23:48:16.929279770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:48:16.929455 containerd[1601]: time="2025-11-04T23:48:16.929377118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:48:16.929778 kubelet[2821]: E1104 23:48:16.929578 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:48:16.929778 kubelet[2821]: E1104 23:48:16.929658 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:48:16.929947 kubelet[2821]: E1104 23:48:16.929830 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fd88t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qfh2g_calico-system(b6136b3a-c7e7-4b68-a7f8-18b611db9e11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:48:16.931369 kubelet[2821]: E1104 23:48:16.931299 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11" Nov 4 23:48:17.171445 containerd[1601]: time="2025-11-04T23:48:17.171037427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:48:17.523551 containerd[1601]: time="2025-11-04T23:48:17.523457303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:48:17.559397 containerd[1601]: time="2025-11-04T23:48:17.559266400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:48:17.559397 containerd[1601]: time="2025-11-04T23:48:17.559303372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:48:17.559959 kubelet[2821]: E1104 23:48:17.559605 2821 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:48:17.559959 kubelet[2821]: E1104 23:48:17.559676 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:48:17.559959 kubelet[2821]: E1104 23:48:17.559851 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62pfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8f987886-b4d7r_calico-apiserver(f48ec58e-62fc-4c79-8936-16e0e5b98045): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:48:17.561063 kubelet[2821]: E1104 23:48:17.561010 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8f987886-b4d7r" podUID="f48ec58e-62fc-4c79-8936-16e0e5b98045" Nov 4 23:48:18.164165 kubelet[2821]: E1104 23:48:18.164092 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:48:19.165496 containerd[1601]: time="2025-11-04T23:48:19.165439767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:48:19.528997 containerd[1601]: time="2025-11-04T23:48:19.528766848Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:48:19.530254 containerd[1601]: time="2025-11-04T23:48:19.530209063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:48:19.530341 containerd[1601]: time="2025-11-04T23:48:19.530311311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:48:19.530531 kubelet[2821]: E1104 23:48:19.530467 2821 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:48:19.530886 kubelet[2821]: E1104 23:48:19.530536 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:48:19.530886 kubelet[2821]: E1104 23:48:19.530691 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,Moun
tPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsgzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dt8tc_calico-system(f1677806-4e2b-4950-9276-2412224d7bb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:48:19.531964 kubelet[2821]: E1104 23:48:19.531876 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dt8tc" podUID="f1677806-4e2b-4950-9276-2412224d7bb8"
Nov 4 23:48:20.165016 containerd[1601]: time="2025-11-04T23:48:20.164849427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 4 23:48:20.634095 containerd[1601]: time="2025-11-04T23:48:20.634016893Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:48:20.729204 containerd[1601]: time="2025-11-04T23:48:20.729132086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:48:20.729525 containerd[1601]: time="2025-11-04T23:48:20.729449321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 4 23:48:20.729848 kubelet[2821]: E1104 23:48:20.729783 2821 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:48:20.730258 kubelet[2821]: E1104 23:48:20.729855 2821 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:48:20.730258 kubelet[2821]: E1104 23:48:20.730038 2821 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl7gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d8f987886-89zn9_calico-apiserver(22717a25-e0bd-4f8f-934c-4d9e328d23a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:48:20.731627 kubelet[2821]: E1104 23:48:20.731581 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8f987886-89zn9" podUID="22717a25-e0bd-4f8f-934c-4d9e328d23a6"
Nov 4 23:48:21.144227 systemd[1]: Started sshd@23-10.0.0.25:22-10.0.0.1:37512.service - OpenSSH per-connection server daemon (10.0.0.1:37512).
Nov 4 23:48:21.196199 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 37512 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI
Nov 4 23:48:21.198360 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:48:21.203971 systemd-logind[1579]: New session 24 of user core.
Nov 4 23:48:21.215139 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 4 23:48:21.338415 sshd[5401]: Connection closed by 10.0.0.1 port 37512
Nov 4 23:48:21.338778 sshd-session[5398]: pam_unix(sshd:session): session closed for user core
Nov 4 23:48:21.344013 systemd[1]: sshd@23-10.0.0.25:22-10.0.0.1:37512.service: Deactivated successfully.
Nov 4 23:48:21.346276 systemd[1]: session-24.scope: Deactivated successfully.
Nov 4 23:48:21.347306 systemd-logind[1579]: Session 24 logged out. Waiting for processes to exit.
Nov 4 23:48:21.348637 systemd-logind[1579]: Removed session 24.
Nov 4 23:48:26.164879 kubelet[2821]: E1104 23:48:26.164793 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfb77c96-hmxm7" podUID="c35617ca-16ee-4ea4-b266-2395bc382e38"
Nov 4 23:48:26.353322 systemd[1]: Started sshd@24-10.0.0.25:22-10.0.0.1:32836.service - OpenSSH per-connection server daemon (10.0.0.1:32836).
Nov 4 23:48:26.409412 sshd[5418]: Accepted publickey for core from 10.0.0.1 port 32836 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI
Nov 4 23:48:26.411162 sshd-session[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:48:26.416464 systemd-logind[1579]: New session 25 of user core.
Nov 4 23:48:26.428050 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 4 23:48:26.562165 sshd[5421]: Connection closed by 10.0.0.1 port 32836
Nov 4 23:48:26.562501 sshd-session[5418]: pam_unix(sshd:session): session closed for user core
Nov 4 23:48:26.569191 systemd[1]: sshd@24-10.0.0.25:22-10.0.0.1:32836.service: Deactivated successfully.
Nov 4 23:48:26.571501 systemd[1]: session-25.scope: Deactivated successfully.
Nov 4 23:48:26.572486 systemd-logind[1579]: Session 25 logged out. Waiting for processes to exit.
Nov 4 23:48:26.573992 systemd-logind[1579]: Removed session 25.
Nov 4 23:48:28.165578 kubelet[2821]: E1104 23:48:28.165462 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cb8dcb848-zz8td" podUID="7e6b05bc-c5cf-42a7-8a57-876b8ddab4ff"
Nov 4 23:48:30.165505 kubelet[2821]: E1104 23:48:30.165391 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8f987886-b4d7r" podUID="f48ec58e-62fc-4c79-8936-16e0e5b98045"
Nov 4 23:48:30.166493 kubelet[2821]: E1104 23:48:30.166430 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qfh2g" podUID="b6136b3a-c7e7-4b68-a7f8-18b611db9e11"
Nov 4 23:48:31.169160 kubelet[2821]: E1104 23:48:31.169080 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-944495598-7w457" podUID="0be97d2f-7ecc-4809-a2c9-d583b48d2a01"
Nov 4 23:48:31.576788 systemd[1]: Started sshd@25-10.0.0.25:22-10.0.0.1:32844.service - OpenSSH per-connection server daemon (10.0.0.1:32844).
Nov 4 23:48:31.630838 sshd[5434]: Accepted publickey for core from 10.0.0.1 port 32844 ssh2: RSA SHA256:r1v+IVG44gLRonaqNkXkWsXsI301PjTJuuv88RJKzKI
Nov 4 23:48:31.632388 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:48:31.637307 systemd-logind[1579]: New session 26 of user core.
Nov 4 23:48:31.643229 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 4 23:48:31.916657 sshd[5437]: Connection closed by 10.0.0.1 port 32844
Nov 4 23:48:31.917609 sshd-session[5434]: pam_unix(sshd:session): session closed for user core
Nov 4 23:48:31.923142 systemd[1]: sshd@25-10.0.0.25:22-10.0.0.1:32844.service: Deactivated successfully.
Nov 4 23:48:31.925719 systemd[1]: session-26.scope: Deactivated successfully.
Nov 4 23:48:31.926836 systemd-logind[1579]: Session 26 logged out. Waiting for processes to exit.
Nov 4 23:48:31.928491 systemd-logind[1579]: Removed session 26.
Nov 4 23:48:32.164674 kubelet[2821]: E1104 23:48:32.164601 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d8f987886-89zn9" podUID="22717a25-e0bd-4f8f-934c-4d9e328d23a6"
Nov 4 23:48:33.167207 kubelet[2821]: E1104 23:48:33.167156 2821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dt8tc" podUID="f1677806-4e2b-4950-9276-2412224d7bb8"
Nov 4 23:48:33.383841 containerd[1601]: time="2025-11-04T23:48:33.383791250Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db79753ced536ed20ef1b1468ccb41af746f4af7d6381afecb94bccf7bedd41a\" id:\"cfba8c8dfea496302b645c1068500511e939d4280d4b7ba588cf524e993199ab\" pid:5462 exited_at:{seconds:1762300113 nanos:383368999}"
Nov 4 23:48:33.385976 kubelet[2821]: E1104 23:48:33.385947 2821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"